2026-03-10T08:29:30.997 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T08:29:31.001 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T08:29:31.019 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964
branch: squid
description: orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python}
email: null
first_in_suite: false
flavor: default
job_id: '964'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
        ms bind msgr2: false
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - but it is still running
    - overall HEALTH_
    - \(OSDMAP_FLAGS\)
    - \(PG_
    - \(OSD_
    - \(OBJECT_
    - \(POOL_APP_NOT_ENABLED\)
    log-only-match:
    - CEPHADM_
    mon_bind_msgr2: false
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: root
  install:
    ceph:
      extra_system_packages:
        deb:
        - python3-pytest
        rpm:
        - python3-pytest
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK51IiOWeilqL9RlNrNrCvpA+nMgWRK+Gr75zMzz0ySHCNC/wlRSPFkK+gZY4GCRp3SXYIygFdkVfxwUE630mfg=
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBUKwGzlhI/PNJpegPRknp7tyOXyaBFBnpVJEp0Y7hGRLuNlR75qWu1/X3ve8CEwXjSEknFJGc3YtY3b1UiErQM=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test_python.sh
    timeout: 1h
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T08:29:31.019 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T08:29:31.019 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T08:29:31.019
INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T08:29:31.019 INFO:teuthology.task.internal:Checking packages...
2026-03-10T08:29:31.019 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T08:29:31.019 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T08:29:31.019 INFO:teuthology.packaging:ref: None
2026-03-10T08:29:31.019 INFO:teuthology.packaging:tag: None
2026-03-10T08:29:31.019 INFO:teuthology.packaging:branch: squid
2026-03-10T08:29:31.019 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:29:31.020 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T08:29:31.827 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T08:29:31.828 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T08:29:31.829 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T08:29:31.829 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T08:29:31.829 INFO:teuthology.task.internal:Saving configuration
2026-03-10T08:29:31.833 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T08:29:31.834 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T08:29:31.840 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 08:28:23.518009', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK51IiOWeilqL9RlNrNrCvpA+nMgWRK+Gr75zMzz0ySHCNC/wlRSPFkK+gZY4GCRp3SXYIygFdkVfxwUE630mfg='}
2026-03-10T08:29:31.844 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 08:28:23.518398', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBUKwGzlhI/PNJpegPRknp7tyOXyaBFBnpVJEp0Y7hGRLuNlR75qWu1/X3ve8CEwXjSEknFJGc3YtY3b1UiErQM='}
2026-03-10T08:29:31.844 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T08:29:31.844 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-10T08:29:31.844 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-10T08:29:31.844 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T08:29:31.849 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-10T08:29:31.853 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-10T08:29:31.853 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fd0352afd90>, signals=[15])
2026-03-10T08:29:31.853 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T08:29:31.853 INFO:teuthology.task.internal:Opening connections...
2026-03-10T08:29:31.853 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-10T08:29:31.854 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T08:29:31.911 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-10T08:29:31.911 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T08:29:31.968 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T08:29:31.969 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-10T08:29:31.999 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-10T08:29:31.999 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:NAME="CentOS Stream"
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="9"
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:ID="centos"
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE="rhel fedora"
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="9"
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:PLATFORM_ID="platform:el9"
2026-03-10T08:29:32.053 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:ANSI_COLOR="0;31"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:LOGO="fedora-logo-icon"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://centos.org/"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T08:29:32.054 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T08:29:32.054 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-10T08:29:32.059 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-10T08:29:32.072 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-10T08:29:32.072 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:NAME="CentOS Stream"
2026-03-10T08:29:32.125
INFO:teuthology.orchestra.run.vm06.stdout:VERSION="9"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:ID="centos"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE="rhel fedora"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="9"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:PLATFORM_ID="platform:el9"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:ANSI_COLOR="0;31"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:LOGO="fedora-logo-icon"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://centos.org/"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T08:29:32.125 INFO:teuthology.orchestra.run.vm06.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T08:29:32.125 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-10T08:29:32.129 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T08:29:32.131 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T08:29:32.131 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T08:29:32.131 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-10T08:29:32.133 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-10T08:29:32.179 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T08:29:32.180 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T08:29:32.180 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-10T08:29:32.188 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-10T08:29:32.200 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T08:29:32.233 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T08:29:32.234 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T08:29:32.241 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-10T08:29:32.254 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:29:32.441 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-10T08:29:32.455 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:29:32.632 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T08:29:32.633 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T08:29:32.633 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T08:29:32.635 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T08:29:32.649 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T08:29:32.650 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T08:29:32.651 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T08:29:32.651 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T08:29:32.690 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T08:29:32.713 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T08:29:32.715 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T08:29:32.715 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T08:29:32.758 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:29:32.758 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T08:29:32.772 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:29:32.772 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T08:29:32.799 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T08:29:32.820 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T08:29:32.829 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T08:29:32.836 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T08:29:32.844 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T08:29:32.846 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T08:29:32.847 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T08:29:32.847 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T08:29:32.872 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T08:29:32.913 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T08:29:32.915 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T08:29:32.915 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T08:29:32.934 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T08:29:32.970 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T08:29:33.008 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T08:29:33.064 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:29:33.064 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T08:29:33.121 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T08:29:33.145 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T08:29:33.201 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T08:29:33.201 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T08:29:33.258 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-10T08:29:33.260 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-10T08:29:33.284 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T08:29:33.324
INFO:teuthology.orchestra.run.vm06.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T08:29:33.612 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T08:29:33.614 INFO:teuthology.task.internal:Starting timer...
2026-03-10T08:29:33.614 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T08:29:33.617 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T08:29:33.619 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T08:29:33.619 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-10T08:29:33.619 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-10T08:29:33.619 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T08:29:33.619 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T08:29:33.619 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T08:29:33.619 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T08:29:33.620 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T08:29:33.621 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T08:29:33.622 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T08:29:34.108 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T08:29:34.113 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T08:29:34.113 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorybo9rjjpy --limit vm03.local,vm06.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T08:31:17.750 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm06.local')]
2026-03-10T08:31:17.750 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-10T08:31:17.751 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T08:31:17.812 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-10T08:31:17.886 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-10T08:31:17.886 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-10T08:31:17.887 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T08:31:17.951 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-10T08:31:18.032 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-10T08:31:18.032 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T08:31:18.035 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T08:31:18.035 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T08:31:18.035 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T08:31:18.036 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T08:31:18.036 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T08:31:18.080 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T08:31:18.099 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T08:31:18.112 INFO:teuthology.orchestra.run.vm06.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T08:31:18.129 INFO:teuthology.orchestra.run.vm03.stderr:sudo: ntpd: command not found
2026-03-10T08:31:18.130 INFO:teuthology.orchestra.run.vm06.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T08:31:18.144 INFO:teuthology.orchestra.run.vm03.stdout:506 Cannot talk to daemon
2026-03-10T08:31:18.155 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T08:31:18.163 INFO:teuthology.orchestra.run.vm06.stderr:sudo: ntpd: command not found
2026-03-10T08:31:18.171 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T08:31:18.177 INFO:teuthology.orchestra.run.vm06.stdout:506 Cannot talk to daemon
2026-03-10T08:31:18.192 INFO:teuthology.orchestra.run.vm06.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T08:31:18.207 INFO:teuthology.orchestra.run.vm06.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T08:31:18.223 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-10T08:31:18.228 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T08:31:18.228 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-10T08:31:18.228 INFO:teuthology.orchestra.run.vm03.stdout:^? vps-nue1.orleans.ddnss.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.228 INFO:teuthology.orchestra.run.vm03.stdout:^? ntp1.lwlcom.net 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.228 INFO:teuthology.orchestra.run.vm03.stdout:^? stage3.opensuse.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.228 INFO:teuthology.orchestra.run.vm03.stdout:^? 139-162-156-95.ip.linode> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.260 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: ntpq: command not found
2026-03-10T08:31:18.264 INFO:teuthology.orchestra.run.vm06.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T08:31:18.264 INFO:teuthology.orchestra.run.vm06.stdout:===============================================================================
2026-03-10T08:31:18.264 INFO:teuthology.orchestra.run.vm06.stdout:^? ntp1.lwlcom.net 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.264 INFO:teuthology.orchestra.run.vm06.stdout:^? stage3.opensuse.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.264 INFO:teuthology.orchestra.run.vm06.stdout:^? 139-162-156-95.ip.linode> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.264 INFO:teuthology.orchestra.run.vm06.stdout:^? vps-nue1.orleans.ddnss.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T08:31:18.264 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T08:31:18.267 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T08:31:18.267 DEBUG:teuthology.orchestra.run.vm03:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T08:31:18.267 DEBUG:teuthology.orchestra.run.vm06:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T08:31:18.268 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf remove nvme-cli -y
2026-03-10T08:31:18.269 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T08:31:18.269 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm03.local
2026-03-10T08:31:18.269 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T08:31:18.269 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T08:31:18.306 DEBUG:teuthology.task.pexec:ubuntu@vm06.local< sudo dnf remove nvme-cli -y
2026-03-10T08:31:18.306 DEBUG:teuthology.task.pexec:ubuntu@vm06.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T08:31:18.306 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm06.local
2026-03-10T08:31:18.306 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T08:31:18.306 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T08:31:18.502 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: nvme-cli
2026-03-10T08:31:18.502 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:31:18.505 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:31:18.506 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:31:18.506 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:31:18.550 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: nvme-cli
2026-03-10T08:31:18.550 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:31:18.553 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:31:18.554 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:31:18.554 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:31:18.940 INFO:teuthology.orchestra.run.vm03.stdout:Last metadata expiration check: 0:01:11 ago on Tue 10 Mar 2026 08:30:07 AM UTC.
2026-03-10T08:31:19.028 INFO:teuthology.orchestra.run.vm06.stdout:Last metadata expiration check: 0:01:08 ago on Tue 10 Mar 2026 08:30:11 AM UTC.
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: Package Architecture Version Repository Size
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:Installing:
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies:
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:31:19.048 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:31:19.048
INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:31:19.049 INFO:teuthology.orchestra.run.vm03.stdout:Install 6 Packages
2026-03-10T08:31:19.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:31:19.049 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 2.3 M
2026-03-10T08:31:19.049 INFO:teuthology.orchestra.run.vm03.stdout:Installed size: 11 M
2026-03-10T08:31:19.049 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages:
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout: Package Architecture Version Repository Size
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout:Installing:
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T08:31:19.146 INFO:teuthology.orchestra.run.vm06.stdout:Installing dependencies:
2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:31:19.147
INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary 2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout:Install 6 Packages 2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout:Total download size: 2.3 M 2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout:Installed size: 11 M 2026-03-10T08:31:19.147 INFO:teuthology.orchestra.run.vm06.stdout:Downloading Packages: 2026-03-10T08:31:19.422 INFO:teuthology.orchestra.run.vm03.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 164 kB/s | 44 kB 00:00 2026-03-10T08:31:19.450 INFO:teuthology.orchestra.run.vm03.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 243 kB/s | 72 kB 00:00 2026-03-10T08:31:19.476 INFO:teuthology.orchestra.run.vm06.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 195 kB/s | 44 kB 00:00 2026-03-10T08:31:19.500 INFO:teuthology.orchestra.run.vm06.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 
289 kB/s | 72 kB 00:00 2026-03-10T08:31:19.579 INFO:teuthology.orchestra.run.vm03.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 538 kB/s | 84 kB 00:00 2026-03-10T08:31:19.580 INFO:teuthology.orchestra.run.vm06.stdout:(3/6): nvme-cli-2.16-1.el9.x86_64.rpm 3.5 MB/s | 1.2 MB 00:00 2026-03-10T08:31:19.581 INFO:teuthology.orchestra.run.vm06.stdout:(4/6): python3-kmod-0.9-32.el9.x86_64.rpm 800 kB/s | 84 kB 00:00 2026-03-10T08:31:19.582 INFO:teuthology.orchestra.run.vm06.stdout:(5/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.8 MB/s | 150 kB 00:00 2026-03-10T08:31:19.608 INFO:teuthology.orchestra.run.vm03.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 956 kB/s | 150 kB 00:00 2026-03-10T08:31:19.702 INFO:teuthology.orchestra.run.vm06.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 7.0 MB/s | 837 kB 00:00 2026-03-10T08:31:19.702 INFO:teuthology.orchestra.run.vm06.stdout:-------------------------------------------------------------------------------- 2026-03-10T08:31:19.702 INFO:teuthology.orchestra.run.vm06.stdout:Total 4.2 MB/s | 2.3 MB 00:00 2026-03-10T08:31:19.787 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check 2026-03-10T08:31:19.797 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded. 2026-03-10T08:31:19.797 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test 2026-03-10T08:31:19.811 INFO:teuthology.orchestra.run.vm03.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 1.8 MB/s | 1.2 MB 00:00 2026-03-10T08:31:19.867 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded. 
2026-03-10T08:31:19.867 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction 2026-03-10T08:31:20.037 INFO:teuthology.orchestra.run.vm03.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 1.8 MB/s | 837 kB 00:00 2026-03-10T08:31:20.037 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-10T08:31:20.037 INFO:teuthology.orchestra.run.vm03.stdout:Total 2.3 MB/s | 2.3 MB 00:00 2026-03-10T08:31:20.058 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1 2026-03-10T08:31:20.072 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-10T08:31:20.083 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-10T08:31:20.094 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T08:31:20.099 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-10T08:31:20.106 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-10T08:31:20.106 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-10T08:31:20.107 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T08:31:20.116 INFO:teuthology.orchestra.run.vm06.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T08:31:20.159 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-03-10T08:31:20.159 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-10T08:31:20.305 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T08:31:20.314 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-03-10T08:31:20.317 INFO:teuthology.orchestra.run.vm06.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T08:31:20.325 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6 2026-03-10T08:31:20.340 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6 2026-03-10T08:31:20.348 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T08:31:20.355 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T08:31:20.356 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T08:31:20.524 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6 2026-03-10T08:31:20.529 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T08:31:20.755 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T08:31:20.755 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T08:31:20.755 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:31:20.889 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6 2026-03-10T08:31:20.889 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 
2026-03-10T08:31:20.889 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:31:21.285 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-10T08:31:21.286 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-10T08:31:21.286 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T08:31:21.286 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T08:31:21.286 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout:Installed: 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:31:21.527 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 
2026-03-10T08:31:21.644 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6 2026-03-10T08:31:21.644 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6 2026-03-10T08:31:21.644 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6 2026-03-10T08:31:21.644 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6 2026-03-10T08:31:21.644 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6 2026-03-10T08:31:21.645 DEBUG:teuthology.parallel:result is None 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout:Installed: 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:31:21.743 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:31:21.794 DEBUG:teuthology.parallel:result is None 2026-03-10T08:31:21.794 INFO:teuthology.run_tasks:Running task install... 
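The install task that starts here first combines the per-project install overrides with the top-level `extra_system_packages` extras before resolving packages. A minimal sketch of that list-concatenating merge, assuming a hypothetical helper (the real logic lives in teuthology.task.install and is more general):

```python
# Hypothetical sketch of the override merge visible in the
# "INSTALL overrides" / "config" debug lines that follow; not
# teuthology's actual implementation.
def merge_extra_system_packages(project_override, top_level):
    """Concatenate per-project package lists with the top-level extras."""
    merged = {}
    for pkg_type in ("deb", "rpm"):
        merged[pkg_type] = (list(project_override.get(pkg_type, []))
                            + list(top_level.get(pkg_type, [])))
    return merged

ceph_override = {"deb": ["python3-pytest"], "rpm": ["python3-pytest"]}
top_level = {
    "deb": ["python3-xmltodict", "python3-jmespath"],
    "rpm": ["bzip2", "perl-Test-Harness",
            "python3-xmltodict", "python3-jmespath"],
}
merged = merge_extra_system_packages(ceph_override, top_level)
# Project-level entries come first, then the top-level extras,
# matching the merged 'extra_system_packages' dict logged by the task.
```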
2026-03-10T08:31:21.797 DEBUG:teuthology.task.install:project ceph
2026-03-10T08:31:21.797 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'extra_system_packages': {'deb': ['python3-pytest'], 'rpm': ['python3-pytest']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T08:31:21.797 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T08:31:21.797 INFO:teuthology.task.install:Using flavor: default
2026-03-10T08:31:21.799 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T08:31:21.799 INFO:teuthology.task.install:extra packages: []
2026-03-10T08:31:21.799 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T08:31:21.799 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:31:21.800 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T08:31:21.800 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:31:22.472 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T08:31:22.472 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T08:31:22.535 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T08:31:22.535 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T08:31:23.023 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T08:31:23.023 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:31:23.023 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T08:31:23.040 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T08:31:23.040 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T08:31:23.040 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T08:31:23.060 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, 
python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, python3-pytest, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T08:31:23.060 DEBUG:teuthology.orchestra.run.vm03:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T08:31:23.080 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, python3-pytest, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T08:31:23.080 DEBUG:teuthology.orchestra.run.vm06:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T08:31:23.135 DEBUG:teuthology.orchestra.run.vm03:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T08:31:23.160 DEBUG:teuthology.orchestra.run.vm06:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T08:31:23.211 DEBUG:teuthology.orchestra.run.vm03:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T08:31:23.246 DEBUG:teuthology.orchestra.run.vm06:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T08:31:23.283 INFO:teuthology.orchestra.run.vm06.stdout:check_obsoletes = 1
2026-03-10T08:31:23.284 INFO:teuthology.orchestra.run.vm03.stdout:check_obsoletes = 1
2026-03-10T08:31:23.286 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all
2026-03-10T08:31:23.287 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean all
2026-03-10T08:31:23.470 INFO:teuthology.orchestra.run.vm03.stdout:41 files removed
2026-03-10T08:31:23.495 INFO:teuthology.orchestra.run.vm06.stdout:41 files removed
2026-03-10T08:31:23.496 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T08:31:23.529 DEBUG:teuthology.orchestra.run.vm06:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T08:31:24.922 INFO:teuthology.orchestra.run.vm06.stdout:ceph packages for x86_64 72 kB/s | 84 kB 00:01
2026-03-10T08:31:24.932 
INFO:teuthology.orchestra.run.vm03.stdout:ceph packages for x86_64 66 kB/s | 84 kB 00:01 2026-03-10T08:31:26.040 INFO:teuthology.orchestra.run.vm06.stdout:ceph noarch packages 11 kB/s | 12 kB 00:01 2026-03-10T08:31:26.041 INFO:teuthology.orchestra.run.vm03.stdout:ceph noarch packages 11 kB/s | 12 kB 00:01 2026-03-10T08:31:26.991 INFO:teuthology.orchestra.run.vm03.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00 2026-03-10T08:31:27.021 INFO:teuthology.orchestra.run.vm06.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00 2026-03-10T08:31:27.579 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - BaseOS 16 MB/s | 8.9 MB 00:00 2026-03-10T08:31:28.242 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - BaseOS 7.4 MB/s | 8.9 MB 00:01 2026-03-10T08:31:30.648 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02 2026-03-10T08:31:31.457 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02 2026-03-10T08:31:34.412 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - CRB 8.2 MB/s | 8.0 MB 00:00 2026-03-10T08:31:35.924 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - Extras packages 29 kB/s | 20 kB 00:00 2026-03-10T08:31:36.522 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - CRB 4.6 MB/s | 8.0 MB 00:01 2026-03-10T08:31:37.670 INFO:teuthology.orchestra.run.vm03.stdout:Extra Packages for Enterprise Linux 12 MB/s | 20 MB 00:01 2026-03-10T08:31:37.801 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - Extras packages 51 kB/s | 20 kB 00:00 2026-03-10T08:31:38.783 INFO:teuthology.orchestra.run.vm06.stdout:Extra Packages for Enterprise Linux 23 MB/s | 20 MB 00:00 2026-03-10T08:31:42.242 INFO:teuthology.orchestra.run.vm03.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-10T08:31:43.456 INFO:teuthology.orchestra.run.vm06.stdout:lab-extras 64 kB/s | 50 kB 00:00 2026-03-10T08:31:43.601 INFO:teuthology.orchestra.run.vm03.stdout:Package 
librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T08:31:43.602 INFO:teuthology.orchestra.run.vm03.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T08:31:43.606 INFO:teuthology.orchestra.run.vm03.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T08:31:43.606 INFO:teuthology.orchestra.run.vm03.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T08:31:43.634 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:31:43.639 INFO:teuthology.orchestra.run.vm03.stdout:====================================================================================== 2026-03-10T08:31:43.639 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout:====================================================================================== 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout:Installing: 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: 
ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytest noarch 6.2.2-7.el9 appstream 519 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T08:31:43.640 
INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout:Upgrading: 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies: 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T08:31:43.640 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 
2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T08:31:43.641 
INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 
548 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T08:31:43.641 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-iniconfig noarch 
1.1.1-7.el9 appstream 17 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-pluggy noarch 0.13.1-7.el9 appstream 41 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-py noarch 1.10.0-6.el9 appstream 477 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T08:31:43.642 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 
5.0.0-2.el9 epel 36 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:Installing weak dependencies: 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: 
2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:====================================================================================== 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:Install 138 Packages 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:Upgrade 2 Packages 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 211 M 2026-03-10T08:31:43.643 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages: 2026-03-10T08:31:44.907 INFO:teuthology.orchestra.run.vm06.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T08:31:44.908 INFO:teuthology.orchestra.run.vm06.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T08:31:44.912 INFO:teuthology.orchestra.run.vm06.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T08:31:44.912 INFO:teuthology.orchestra.run.vm06.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T08:31:44.941 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 
2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout:====================================================================================== 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout:====================================================================================== 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout:Installing: 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T08:31:44.945 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume noarch 
2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytest noarch 6.2.2-7.el9 appstream 519 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout:Upgrading: 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T08:31:44.946 
INFO:teuthology.orchestra.run.vm06.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout:Installing dependencies: 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: 
gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T08:31:44.946 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T08:31:44.947 
INFO:teuthology.orchestra.run.vm06.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 
2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-iniconfig noarch 1.1.1-7.el9 appstream 17 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools 
noarch 3.5.0-2.el9 epel 19 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-pluggy noarch 0.13.1-7.el9 appstream 41 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf noarch 
3.14.0-17.el9 appstream 267 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-py noarch 1.10.0-6.el9 appstream 477 k 2026-03-10T08:31:44.947 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client noarch 1.2.3-2.el9 
epel 90 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout:Installing weak dependencies: 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout:====================================================================================== 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout:Install 138 Packages 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout:Upgrade 2 Packages 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:31:44.948 INFO:teuthology.orchestra.run.vm06.stdout:Total download size: 211 M 2026-03-10T08:31:44.948 
INFO:teuthology.orchestra.run.vm06.stdout:Downloading Packages: 2026-03-10T08:31:45.406 INFO:teuthology.orchestra.run.vm03.stdout:(1/140): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-10T08:31:46.223 INFO:teuthology.orchestra.run.vm03.stdout:(2/140): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 2026-03-10T08:31:46.368 INFO:teuthology.orchestra.run.vm03.stdout:(3/140): ceph-immutable-object-cache-19.2.3-678 1.0 MB/s | 145 kB 00:00 2026-03-10T08:31:46.531 INFO:teuthology.orchestra.run.vm03.stdout:(4/140): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.5 MB/s | 5.5 MB 00:01 2026-03-10T08:31:46.603 INFO:teuthology.orchestra.run.vm06.stdout:(1/140): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00 2026-03-10T08:31:46.656 INFO:teuthology.orchestra.run.vm03.stdout:(5/140): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 8.6 MB/s | 1.1 MB 00:00 2026-03-10T08:31:46.740 INFO:teuthology.orchestra.run.vm03.stdout:(6/140): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 6.5 MB/s | 2.4 MB 00:00 2026-03-10T08:31:47.128 INFO:teuthology.orchestra.run.vm03.stdout:(7/140): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 10 MB/s | 4.7 MB 00:00 2026-03-10T08:31:47.402 INFO:teuthology.orchestra.run.vm06.stdout:(2/140): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 2026-03-10T08:31:47.526 INFO:teuthology.orchestra.run.vm06.stdout:(3/140): ceph-immutable-object-cache-19.2.3-678 1.1 MB/s | 145 kB 00:00 2026-03-10T08:31:47.942 INFO:teuthology.orchestra.run.vm06.stdout:(4/140): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.0 MB/s | 5.5 MB 00:01 2026-03-10T08:31:47.982 INFO:teuthology.orchestra.run.vm06.stdout:(5/140): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 5.3 MB/s | 2.4 MB 00:00 2026-03-10T08:31:47.982 INFO:teuthology.orchestra.run.vm03.stdout:(8/140): ceph-common-19.2.3-678.ge911bdeb.el9.x 7.1 MB/s | 22 MB 00:03 2026-03-10T08:31:48.051 INFO:teuthology.orchestra.run.vm03.stdout:(9/140): ceph-radosgw-19.2.3-678.ge911bdeb.el9. 
12 MB/s | 11 MB 00:00 2026-03-10T08:31:48.084 INFO:teuthology.orchestra.run.vm06.stdout:(6/140): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 7.6 MB/s | 1.1 MB 00:00 2026-03-10T08:31:48.119 INFO:teuthology.orchestra.run.vm03.stdout:(10/140): ceph-selinux-19.2.3-678.ge911bdeb.el9 184 kB/s | 25 kB 00:00 2026-03-10T08:31:48.273 INFO:teuthology.orchestra.run.vm03.stdout:(11/140): libcephfs-devel-19.2.3-678.ge911bdeb. 217 kB/s | 34 kB 00:00 2026-03-10T08:31:48.323 INFO:teuthology.orchestra.run.vm03.stdout:(12/140): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 11 MB/s | 17 MB 00:01 2026-03-10T08:31:48.403 INFO:teuthology.orchestra.run.vm03.stdout:(13/140): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.6 MB/s | 1.0 MB 00:00 2026-03-10T08:31:48.453 INFO:teuthology.orchestra.run.vm03.stdout:(14/140): libcephsqlite-19.2.3-678.ge911bdeb.el 1.2 MB/s | 163 kB 00:00 2026-03-10T08:31:48.508 INFO:teuthology.orchestra.run.vm06.stdout:(7/140): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 9.0 MB/s | 4.7 MB 00:00 2026-03-10T08:31:48.522 INFO:teuthology.orchestra.run.vm03.stdout:(15/140): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-10T08:31:48.580 INFO:teuthology.orchestra.run.vm03.stdout:(16/140): libradosstriper1-19.2.3-678.ge911bdeb 3.9 MB/s | 503 kB 00:00 2026-03-10T08:31:48.739 INFO:teuthology.orchestra.run.vm03.stdout:(17/140): python3-ceph-argparse-19.2.3-678.ge91 283 kB/s | 45 kB 00:00 2026-03-10T08:31:48.892 INFO:teuthology.orchestra.run.vm03.stdout:(18/140): python3-ceph-common-19.2.3-678.ge911b 930 kB/s | 142 kB 00:00 2026-03-10T08:31:49.011 INFO:teuthology.orchestra.run.vm03.stdout:(19/140): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 11 MB/s | 5.4 MB 00:00 2026-03-10T08:31:49.041 INFO:teuthology.orchestra.run.vm03.stdout:(20/140): python3-cephfs-19.2.3-678.ge911bdeb.e 1.1 MB/s | 165 kB 00:00 2026-03-10T08:31:49.148 INFO:teuthology.orchestra.run.vm03.stdout:(21/140): python3-rados-19.2.3-678.ge911bdeb.el 2.3 MB/s | 323 kB 00:00 2026-03-10T08:31:49.176 
INFO:teuthology.orchestra.run.vm03.stdout:(22/140): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 303 kB 00:00 2026-03-10T08:31:49.272 INFO:teuthology.orchestra.run.vm03.stdout:(23/140): python3-rgw-19.2.3-678.ge911bdeb.el9. 803 kB/s | 100 kB 00:00 2026-03-10T08:31:49.274 INFO:teuthology.orchestra.run.vm06.stdout:(8/140): ceph-common-19.2.3-678.ge911bdeb.el9.x 6.9 MB/s | 22 MB 00:03 2026-03-10T08:31:49.293 INFO:teuthology.orchestra.run.vm03.stdout:(24/140): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 728 kB/s | 85 kB 00:00 2026-03-10T08:31:49.388 INFO:teuthology.orchestra.run.vm06.stdout:(9/140): ceph-selinux-19.2.3-678.ge911bdeb.el9. 220 kB/s | 25 kB 00:00 2026-03-10T08:31:49.438 INFO:teuthology.orchestra.run.vm03.stdout:(25/140): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 171 kB 00:00 2026-03-10T08:31:49.505 INFO:teuthology.orchestra.run.vm06.stdout:(10/140): ceph-radosgw-19.2.3-678.ge911bdeb.el9 11 MB/s | 11 MB 00:00 2026-03-10T08:31:49.579 INFO:teuthology.orchestra.run.vm03.stdout:(26/140): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 10 MB/s | 3.1 MB 00:00 2026-03-10T08:31:49.586 INFO:teuthology.orchestra.run.vm03.stdout:(27/140): ceph-grafana-dashboards-19.2.3-678.ge 210 kB/s | 31 kB 00:00 2026-03-10T08:31:49.644 INFO:teuthology.orchestra.run.vm06.stdout:(11/140): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 11 MB/s | 17 MB 00:01 2026-03-10T08:31:49.645 INFO:teuthology.orchestra.run.vm06.stdout:(12/140): libcephfs-devel-19.2.3-678.ge911bdeb. 
240 kB/s | 34 kB 00:00 2026-03-10T08:31:49.698 INFO:teuthology.orchestra.run.vm03.stdout:(28/140): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-10T08:31:49.802 INFO:teuthology.orchestra.run.vm06.stdout:(13/140): libcephsqlite-19.2.3-678.ge911bdeb.el 1.0 MB/s | 163 kB 00:00 2026-03-10T08:31:49.804 INFO:teuthology.orchestra.run.vm06.stdout:(14/140): libcephfs2-19.2.3-678.ge911bdeb.el9.x 6.1 MB/s | 1.0 MB 00:00 2026-03-10T08:31:49.899 INFO:teuthology.orchestra.run.vm03.stdout:(29/140): ceph-mgr-dashboard-19.2.3-678.ge911bd 12 MB/s | 3.8 MB 00:00 2026-03-10T08:31:49.938 INFO:teuthology.orchestra.run.vm06.stdout:(15/140): librados-devel-19.2.3-678.ge911bdeb.e 927 kB/s | 127 kB 00:00 2026-03-10T08:31:49.957 INFO:teuthology.orchestra.run.vm06.stdout:(16/140): libradosstriper1-19.2.3-678.ge911bdeb 3.2 MB/s | 503 kB 00:00 2026-03-10T08:31:50.022 INFO:teuthology.orchestra.run.vm03.stdout:(30/140): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00 2026-03-10T08:31:50.100 INFO:teuthology.orchestra.run.vm06.stdout:(17/140): python3-ceph-argparse-19.2.3-678.ge91 315 kB/s | 45 kB 00:00 2026-03-10T08:31:50.162 INFO:teuthology.orchestra.run.vm03.stdout:(31/140): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 352 kB/s | 49 kB 00:00 2026-03-10T08:31:50.241 INFO:teuthology.orchestra.run.vm06.stdout:(18/140): python3-ceph-common-19.2.3-678.ge911b 1.0 MB/s | 142 kB 00:00 2026-03-10T08:31:50.283 INFO:teuthology.orchestra.run.vm03.stdout:(32/140): ceph-mgr-diskprediction-local-19.2.3- 13 MB/s | 7.4 MB 00:00 2026-03-10T08:31:50.295 INFO:teuthology.orchestra.run.vm03.stdout:(33/140): ceph-prometheus-alerts-19.2.3-678.ge9 126 kB/s | 17 kB 00:00 2026-03-10T08:31:50.362 INFO:teuthology.orchestra.run.vm06.stdout:(19/140): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 13 MB/s | 5.4 MB 00:00 2026-03-10T08:31:50.363 INFO:teuthology.orchestra.run.vm06.stdout:(20/140): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00 2026-03-10T08:31:50.417 
INFO:teuthology.orchestra.run.vm03.stdout:(34/140): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 299 kB 00:00 2026-03-10T08:31:50.444 INFO:teuthology.orchestra.run.vm03.stdout:(35/140): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.0 MB/s | 769 kB 00:00 2026-03-10T08:31:50.481 INFO:teuthology.orchestra.run.vm06.stdout:(21/140): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00 2026-03-10T08:31:50.484 INFO:teuthology.orchestra.run.vm06.stdout:(22/140): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 303 kB 00:00 2026-03-10T08:31:50.563 INFO:teuthology.orchestra.run.vm03.stdout:(36/140): ledmon-libs-1.1.0-3.el9.x86_64.rpm 343 kB/s | 40 kB 00:00 2026-03-10T08:31:50.595 INFO:teuthology.orchestra.run.vm06.stdout:(23/140): python3-rgw-19.2.3-678.ge911bdeb.el9. 871 kB/s | 100 kB 00:00 2026-03-10T08:31:50.602 INFO:teuthology.orchestra.run.vm06.stdout:(24/140): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 722 kB/s | 85 kB 00:00 2026-03-10T08:31:50.622 INFO:teuthology.orchestra.run.vm03.stdout:(37/140): libconfig-1.7.2-9.el9.x86_64.rpm 1.2 MB/s | 72 kB 00:00 2026-03-10T08:31:50.649 INFO:teuthology.orchestra.run.vm03.stdout:(38/140): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.5 MB/s | 351 kB 00:00 2026-03-10T08:31:50.682 INFO:teuthology.orchestra.run.vm03.stdout:(39/140): libquadmath-11.5.0-14.el9.x86_64.rpm 5.6 MB/s | 184 kB 00:00 2026-03-10T08:31:50.717 INFO:teuthology.orchestra.run.vm03.stdout:(40/140): mailcap-2.1.49-5.el9.noarch.rpm 952 kB/s | 33 kB 00:00 2026-03-10T08:31:50.815 INFO:teuthology.orchestra.run.vm03.stdout:(41/140): pciutils-3.7.0-7.el9.x86_64.rpm 954 kB/s | 93 kB 00:00 2026-03-10T08:31:50.815 INFO:teuthology.orchestra.run.vm06.stdout:(25/140): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 805 kB/s | 171 kB 00:00 2026-03-10T08:31:50.818 INFO:teuthology.orchestra.run.vm03.stdout:(42/140): libgfortran-11.5.0-14.el9.x86_64.rpm 4.0 MB/s | 794 kB 00:00 2026-03-10T08:31:50.849 INFO:teuthology.orchestra.run.vm03.stdout:(43/140): python3-cffi-1.14.5-5.el9.x86_64.rpm 7.3 MB/s 
| 253 kB 00:00 2026-03-10T08:31:50.882 INFO:teuthology.orchestra.run.vm03.stdout:(44/140): python3-ply-3.11-14.el9.noarch.rpm 3.3 MB/s | 106 kB 00:00 2026-03-10T08:31:50.914 INFO:teuthology.orchestra.run.vm03.stdout:(45/140): python3-cryptography-36.0.1-5.el9.x86 13 MB/s | 1.2 MB 00:00 2026-03-10T08:31:50.916 INFO:teuthology.orchestra.run.vm03.stdout:(46/140): python3-pycparser-2.20-6.el9.noarch.r 3.9 MB/s | 135 kB 00:00 2026-03-10T08:31:50.936 INFO:teuthology.orchestra.run.vm06.stdout:(26/140): ceph-grafana-dashboards-19.2.3-678.ge 259 kB/s | 31 kB 00:00 2026-03-10T08:31:50.947 INFO:teuthology.orchestra.run.vm03.stdout:(47/140): python3-requests-2.25.1-10.el9.noarch 3.9 MB/s | 126 kB 00:00 2026-03-10T08:31:50.951 INFO:teuthology.orchestra.run.vm03.stdout:(48/140): python3-urllib3-1.26.5-7.el9.noarch.r 6.0 MB/s | 218 kB 00:00 2026-03-10T08:31:50.956 INFO:teuthology.orchestra.run.vm06.stdout:(27/140): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.6 MB/s | 3.1 MB 00:00 2026-03-10T08:31:50.980 INFO:teuthology.orchestra.run.vm03.stdout:(49/140): unzip-6.0-59.el9.x86_64.rpm 5.4 MB/s | 182 kB 00:00 2026-03-10T08:31:50.984 INFO:teuthology.orchestra.run.vm03.stdout:(50/140): zip-3.0-35.el9.x86_64.rpm 7.9 MB/s | 266 kB 00:00 2026-03-10T08:31:51.054 INFO:teuthology.orchestra.run.vm06.stdout:(28/140): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-10T08:31:51.227 INFO:teuthology.orchestra.run.vm03.stdout:(51/140): flexiblas-3.0.4-9.el9.x86_64.rpm 122 kB/s | 30 kB 00:00 2026-03-10T08:31:51.321 INFO:teuthology.orchestra.run.vm03.stdout:(52/140): boost-program-options-1.75.0-13.el9.x 305 kB/s | 104 kB 00:00 2026-03-10T08:31:51.354 INFO:teuthology.orchestra.run.vm06.stdout:(29/140): ceph-mgr-dashboard-19.2.3-678.ge911bd 9.6 MB/s | 3.8 MB 00:00 2026-03-10T08:31:51.528 INFO:teuthology.orchestra.run.vm03.stdout:(53/140): ceph-test-19.2.3-678.ge911bdeb.el9.x8 14 MB/s | 50 MB 00:03 2026-03-10T08:31:51.553 INFO:teuthology.orchestra.run.vm06.stdout:(30/140): 
ceph-mgr-modules-core-19.2.3-678.ge91 1.3 MB/s | 253 kB 00:00 2026-03-10T08:31:51.560 INFO:teuthology.orchestra.run.vm03.stdout:(54/140): flexiblas-openblas-openmp-3.0.4-9.el9 62 kB/s | 15 kB 00:00 2026-03-10T08:31:51.577 INFO:teuthology.orchestra.run.vm06.stdout:(31/140): ceph-mgr-diskprediction-local-19.2.3- 14 MB/s | 7.4 MB 00:00 2026-03-10T08:31:51.638 INFO:teuthology.orchestra.run.vm03.stdout:(55/140): flexiblas-netlib-3.0.4-9.el9.x86_64.r 7.3 MB/s | 3.0 MB 00:00 2026-03-10T08:31:51.655 INFO:teuthology.orchestra.run.vm03.stdout:(56/140): libnbd-1.20.3-4.el9.x86_64.rpm 1.3 MB/s | 164 kB 00:00 2026-03-10T08:31:51.656 INFO:teuthology.orchestra.run.vm03.stdout:(57/140): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.7 MB/s | 160 kB 00:00 2026-03-10T08:31:51.665 INFO:teuthology.orchestra.run.vm06.stdout:(32/140): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 431 kB/s | 49 kB 00:00 2026-03-10T08:31:51.685 INFO:teuthology.orchestra.run.vm03.stdout:(58/140): librabbitmq-0.11.0-7.el9.x86_64.rpm 949 kB/s | 45 kB 00:00 2026-03-10T08:31:51.697 INFO:teuthology.orchestra.run.vm06.stdout:(33/140): ceph-prometheus-alerts-19.2.3-678.ge9 140 kB/s | 17 kB 00:00 2026-03-10T08:31:51.706 INFO:teuthology.orchestra.run.vm03.stdout:(59/140): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.8 MB/s | 246 kB 00:00 2026-03-10T08:31:51.782 INFO:teuthology.orchestra.run.vm06.stdout:(34/140): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.5 MB/s | 299 kB 00:00 2026-03-10T08:31:51.799 INFO:teuthology.orchestra.run.vm03.stdout:(60/140): libxslt-1.1.34-12.el9.x86_64.rpm 2.0 MB/s | 233 kB 00:00 2026-03-10T08:31:51.802 INFO:teuthology.orchestra.run.vm03.stdout:(61/140): librdkafka-1.6.1-102.el9.x86_64.rpm 4.4 MB/s | 662 kB 00:00 2026-03-10T08:31:51.803 INFO:teuthology.orchestra.run.vm03.stdout:(62/140): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.9 MB/s | 292 kB 00:00 2026-03-10T08:31:51.828 INFO:teuthology.orchestra.run.vm06.stdout:(35/140): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.7 MB/s | 769 kB 00:00 2026-03-10T08:31:51.850 INFO:teuthology.orchestra.run.vm03.stdout:(63/140): lua-5.4.4-4.el9.x86_64.rpm 3.7 MB/s | 188 kB 00:00 2026-03-10T08:31:51.851 INFO:teuthology.orchestra.run.vm03.stdout:(64/140): openblas-0.3.29-1.el9.x86_64.rpm 888 kB/s | 42 kB 00:00 2026-03-10T08:31:51.896 INFO:teuthology.orchestra.run.vm06.stdout:(36/140): ledmon-libs-1.1.0-3.el9.x86_64.rpm 601 kB/s | 40 kB 00:00 2026-03-10T08:31:51.935 INFO:teuthology.orchestra.run.vm06.stdout:(37/140): cryptsetup-2.8.1-3.el9.x86_64.rpm 2.2 MB/s | 351 kB 00:00 2026-03-10T08:31:51.993 INFO:teuthology.orchestra.run.vm06.stdout:(38/140): libconfig-1.7.2-9.el9.x86_64.rpm 742 kB/s | 72 kB 00:00 2026-03-10T08:31:52.088 INFO:teuthology.orchestra.run.vm03.stdout:(65/140): protobuf-3.14.0-17.el9.x86_64.rpm 4.2 MB/s | 1.0 MB 00:00 2026-03-10T08:31:52.129 INFO:teuthology.orchestra.run.vm06.stdout:(39/140): libquadmath-11.5.0-14.el9.x86_64.rpm 1.3 MB/s | 184 kB 00:00 2026-03-10T08:31:52.147 INFO:teuthology.orchestra.run.vm06.stdout:(40/140): mailcap-2.1.49-5.el9.noarch.rpm 1.8 MB/s | 33 kB 00:00 2026-03-10T08:31:52.169 INFO:teuthology.orchestra.run.vm06.stdout:(41/140): pciutils-3.7.0-7.el9.x86_64.rpm 4.1 MB/s | 93 kB 00:00 2026-03-10T08:31:52.204 INFO:teuthology.orchestra.run.vm06.stdout:(42/140): libgfortran-11.5.0-14.el9.x86_64.rpm 2.9 MB/s | 794 kB 00:00 2026-03-10T08:31:52.219 INFO:teuthology.orchestra.run.vm06.stdout:(43/140): 
python3-cffi-1.14.5-5.el9.x86_64.rpm 5.0 MB/s | 253 kB 00:00 2026-03-10T08:31:52.255 INFO:teuthology.orchestra.run.vm06.stdout:(44/140): python3-ply-3.11-14.el9.noarch.rpm 3.0 MB/s | 106 kB 00:00 2026-03-10T08:31:52.315 INFO:teuthology.orchestra.run.vm06.stdout:(45/140): python3-pycparser-2.20-6.el9.noarch.r 2.2 MB/s | 135 kB 00:00 2026-03-10T08:31:52.331 INFO:teuthology.orchestra.run.vm06.stdout:(46/140): python3-cryptography-36.0.1-5.el9.x86 9.8 MB/s | 1.2 MB 00:00 2026-03-10T08:31:52.335 INFO:teuthology.orchestra.run.vm06.stdout:(47/140): python3-requests-2.25.1-10.el9.noarch 6.0 MB/s | 126 kB 00:00 2026-03-10T08:31:52.376 INFO:teuthology.orchestra.run.vm06.stdout:(48/140): unzip-6.0-59.el9.x86_64.rpm 4.4 MB/s | 182 kB 00:00 2026-03-10T08:31:52.479 INFO:teuthology.orchestra.run.vm06.stdout:(49/140): zip-3.0-35.el9.x86_64.rpm 2.5 MB/s | 266 kB 00:00 2026-03-10T08:31:52.486 INFO:teuthology.orchestra.run.vm06.stdout:(50/140): python3-urllib3-1.26.5-7.el9.noarch.r 1.4 MB/s | 218 kB 00:00 2026-03-10T08:31:52.540 INFO:teuthology.orchestra.run.vm03.stdout:(66/140): python3-devel-3.9.25-3.el9.x86_64.rpm 541 kB/s | 244 kB 00:00 2026-03-10T08:31:52.669 INFO:teuthology.orchestra.run.vm06.stdout:(51/140): flexiblas-3.0.4-9.el9.x86_64.rpm 164 kB/s | 30 kB 00:00 2026-03-10T08:31:52.780 INFO:teuthology.orchestra.run.vm06.stdout:(52/140): boost-program-options-1.75.0-13.el9.x 346 kB/s | 104 kB 00:00 2026-03-10T08:31:52.788 INFO:teuthology.orchestra.run.vm03.stdout:(67/140): openblas-openmp-0.3.29-1.el9.x86_64.r 5.4 MB/s | 5.3 MB 00:00 2026-03-10T08:31:52.859 INFO:teuthology.orchestra.run.vm06.stdout:(53/140): flexiblas-openblas-openmp-3.0.4-9.el9 187 kB/s | 15 kB 00:00 2026-03-10T08:31:52.953 INFO:teuthology.orchestra.run.vm03.stdout:(68/140): python3-iniconfig-1.1.1-7.el9.noarch. 
42 kB/s | 17 kB 00:00 2026-03-10T08:31:53.033 INFO:teuthology.orchestra.run.vm06.stdout:(54/140): libnbd-1.20.3-4.el9.x86_64.rpm 944 kB/s | 164 kB 00:00 2026-03-10T08:31:53.076 INFO:teuthology.orchestra.run.vm06.stdout:(55/140): flexiblas-netlib-3.0.4-9.el9.x86_64.r 7.3 MB/s | 3.0 MB 00:00 2026-03-10T08:31:53.076 INFO:teuthology.orchestra.run.vm03.stdout:(69/140): python3-babel-2.9.1-2.el9.noarch.rpm 4.9 MB/s | 6.0 MB 00:01 2026-03-10T08:31:53.077 INFO:teuthology.orchestra.run.vm03.stdout:(70/140): python3-jinja2-2.11.3-8.el9.noarch.rp 860 kB/s | 249 kB 00:00 2026-03-10T08:31:53.078 INFO:teuthology.orchestra.run.vm03.stdout:(71/140): python3-jmespath-1.0.1-1.el9.noarch.r 379 kB/s | 48 kB 00:00 2026-03-10T08:31:53.126 INFO:teuthology.orchestra.run.vm03.stdout:(72/140): python3-libstoragemgmt-1.10.1-1.el9.x 3.5 MB/s | 177 kB 00:00 2026-03-10T08:31:53.127 INFO:teuthology.orchestra.run.vm03.stdout:(73/140): python3-markupsafe-1.1.1-12.el9.x86_6 711 kB/s | 35 kB 00:00 2026-03-10T08:31:53.129 INFO:teuthology.orchestra.run.vm03.stdout:(74/140): python3-mako-1.1.4-6.el9.noarch.rpm 3.3 MB/s | 172 kB 00:00 2026-03-10T08:31:53.180 INFO:teuthology.orchestra.run.vm03.stdout:(75/140): python3-numpy-f2py-1.23.5-2.el9.x86_6 8.3 MB/s | 442 kB 00:00 2026-03-10T08:31:53.260 INFO:teuthology.orchestra.run.vm03.stdout:(76/140): python3-packaging-20.9-5.el9.noarch.r 593 kB/s | 77 kB 00:00 2026-03-10T08:31:53.261 INFO:teuthology.orchestra.run.vm06.stdout:(56/140): ceph-test-19.2.3-678.ge911bdeb.el9.x8 13 MB/s | 50 MB 00:03 2026-03-10T08:31:53.262 INFO:teuthology.orchestra.run.vm06.stdout:(57/140): librabbitmq-0.11.0-7.el9.x86_64.rpm 244 kB/s | 45 kB 00:00 2026-03-10T08:31:53.262 INFO:teuthology.orchestra.run.vm06.stdout:(58/140): libpmemobj-1.12.1-1.el9.x86_64.rpm 697 kB/s | 160 kB 00:00 2026-03-10T08:31:53.275 INFO:teuthology.orchestra.run.vm03.stdout:(77/140): python3-pluggy-0.13.1-7.el9.noarch.rp 440 kB/s | 41 kB 00:00 2026-03-10T08:31:53.418 
INFO:teuthology.orchestra.run.vm03.stdout:(78/140): python3-protobuf-3.14.0-17.el9.noarch 1.7 MB/s | 267 kB 00:00 2026-03-10T08:31:53.454 INFO:teuthology.orchestra.run.vm06.stdout:(59/140): libxslt-1.1.34-12.el9.x86_64.rpm 1.2 MB/s | 233 kB 00:00 2026-03-10T08:31:53.462 INFO:teuthology.orchestra.run.vm06.stdout:(60/140): libstoragemgmt-1.10.1-1.el9.x86_64.rp 1.2 MB/s | 246 kB 00:00 2026-03-10T08:31:53.466 INFO:teuthology.orchestra.run.vm03.stdout:(79/140): python3-py-1.10.0-6.el9.noarch.rpm 2.4 MB/s | 477 kB 00:00 2026-03-10T08:31:53.585 INFO:teuthology.orchestra.run.vm06.stdout:(61/140): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.2 MB/s | 292 kB 00:00 2026-03-10T08:31:53.602 INFO:teuthology.orchestra.run.vm06.stdout:(62/140): lua-5.4.4-4.el9.x86_64.rpm 1.3 MB/s | 188 kB 00:00 2026-03-10T08:31:53.690 INFO:teuthology.orchestra.run.vm06.stdout:(63/140): openblas-0.3.29-1.el9.x86_64.rpm 402 kB/s | 42 kB 00:00 2026-03-10T08:31:53.748 INFO:teuthology.orchestra.run.vm03.stdout:(80/140): python3-pyasn1-0.4.8-7.el9.noarch.rpm 478 kB/s | 157 kB 00:00 2026-03-10T08:31:53.813 INFO:teuthology.orchestra.run.vm03.stdout:(81/140): python3-pyasn1-modules-0.4.8-7.el9.no 803 kB/s | 277 kB 00:00 2026-03-10T08:31:53.824 INFO:teuthology.orchestra.run.vm06.stdout:(64/140): librdkafka-1.6.1-102.el9.x86_64.rpm 1.1 MB/s | 662 kB 00:00 2026-03-10T08:31:53.911 INFO:teuthology.orchestra.run.vm06.stdout:(65/140): openblas-openmp-0.3.29-1.el9.x86_64.r 17 MB/s | 5.3 MB 00:00 2026-03-10T08:31:53.911 INFO:teuthology.orchestra.run.vm03.stdout:(82/140): python3-numpy-1.23.5-2.el9.x86_64.rpm 7.8 MB/s | 6.1 MB 00:00 2026-03-10T08:31:53.914 INFO:teuthology.orchestra.run.vm06.stdout:(66/140): protobuf-3.14.0-17.el9.x86_64.rpm 4.5 MB/s | 1.0 MB 00:00 2026-03-10T08:31:53.936 INFO:teuthology.orchestra.run.vm03.stdout:(83/140): python3-pytest-6.2.2-7.el9.noarch.rpm 2.7 MB/s | 519 kB 00:00 2026-03-10T08:31:53.958 INFO:teuthology.orchestra.run.vm03.stdout:(84/140): python3-requests-oauthlib-1.3.0-12.el 368 kB/s | 54 
kB 00:00 2026-03-10T08:31:53.984 INFO:teuthology.orchestra.run.vm03.stdout:(85/140): python3-toml-0.10.2-6.el9.noarch.rpm 873 kB/s | 42 kB 00:00 2026-03-10T08:31:54.071 INFO:teuthology.orchestra.run.vm03.stdout:(86/140): qatlib-25.08.0-2.el9.x86_64.rpm 2.1 MB/s | 240 kB 00:00 2026-03-10T08:31:54.072 INFO:teuthology.orchestra.run.vm06.stdout:(67/140): python3-devel-3.9.25-3.el9.x86_64.rpm 1.5 MB/s | 244 kB 00:00 2026-03-10T08:31:54.078 INFO:teuthology.orchestra.run.vm03.stdout:(87/140): qatlib-service-25.08.0-2.el9.x86_64.r 398 kB/s | 37 kB 00:00 2026-03-10T08:31:54.096 INFO:teuthology.orchestra.run.vm06.stdout:(68/140): python3-iniconfig-1.1.1-7.el9.noarch. 96 kB/s | 17 kB 00:00 2026-03-10T08:31:54.236 INFO:teuthology.orchestra.run.vm06.stdout:(69/140): python3-jmespath-1.0.1-1.el9.noarch.r 341 kB/s | 48 kB 00:00 2026-03-10T08:31:54.271 INFO:teuthology.orchestra.run.vm06.stdout:(70/140): python3-jinja2-2.11.3-8.el9.noarch.rp 1.2 MB/s | 249 kB 00:00 2026-03-10T08:31:54.375 INFO:teuthology.orchestra.run.vm06.stdout:(71/140): python3-libstoragemgmt-1.10.1-1.el9.x 1.2 MB/s | 177 kB 00:00 2026-03-10T08:31:54.381 INFO:teuthology.orchestra.run.vm06.stdout:(72/140): python3-mako-1.1.4-6.el9.noarch.rpm 1.5 MB/s | 172 kB 00:00 2026-03-10T08:31:54.446 INFO:teuthology.orchestra.run.vm03.stdout:(88/140): qatzip-libs-1.3.1-1.el9.x86_64.rpm 178 kB/s | 66 kB 00:00 2026-03-10T08:31:54.451 INFO:teuthology.orchestra.run.vm06.stdout:(73/140): python3-markupsafe-1.1.1-12.el9.x86_6 461 kB/s | 35 kB 00:00 2026-03-10T08:31:54.484 INFO:teuthology.orchestra.run.vm03.stdout:(89/140): socat-1.7.4.1-8.el9.x86_64.rpm 748 kB/s | 303 kB 00:00 2026-03-10T08:31:54.655 INFO:teuthology.orchestra.run.vm03.stdout:(90/140): lua-devel-5.4.4-4.el9.x86_64.rpm 130 kB/s | 22 kB 00:00 2026-03-10T08:31:54.669 INFO:teuthology.orchestra.run.vm06.stdout:(74/140): python3-numpy-f2py-1.23.5-2.el9.x86_6 2.0 MB/s | 442 kB 00:00 2026-03-10T08:31:54.751 INFO:teuthology.orchestra.run.vm06.stdout:(75/140): 
python3-packaging-20.9-5.el9.noarch.r 941 kB/s | 77 kB 00:00 2026-03-10T08:31:54.776 INFO:teuthology.orchestra.run.vm06.stdout:(76/140): python3-numpy-1.23.5-2.el9.x86_64.rpm 16 MB/s | 6.1 MB 00:00 2026-03-10T08:31:54.810 INFO:teuthology.orchestra.run.vm03.stdout:(91/140): xmlstarlet-1.6.1-20.el9.x86_64.rpm 175 kB/s | 64 kB 00:00 2026-03-10T08:31:54.839 INFO:teuthology.orchestra.run.vm06.stdout:(77/140): python3-pluggy-0.13.1-7.el9.noarch.rp 475 kB/s | 41 kB 00:00 2026-03-10T08:31:54.868 INFO:teuthology.orchestra.run.vm03.stdout:(92/140): protobuf-compiler-3.14.0-17.el9.x86_6 4.0 MB/s | 862 kB 00:00 2026-03-10T08:31:55.048 INFO:teuthology.orchestra.run.vm06.stdout:(78/140): python3-py-1.10.0-6.el9.noarch.rpm 2.2 MB/s | 477 kB 00:00 2026-03-10T08:31:55.147 INFO:teuthology.orchestra.run.vm03.stdout:(93/140): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.6 MB/s | 551 kB 00:00 2026-03-10T08:31:55.148 INFO:teuthology.orchestra.run.vm03.stdout:(94/140): gperftools-libs-2.9.1-3.el9.x86_64.rp 1.1 MB/s | 308 kB 00:00 2026-03-10T08:31:55.168 INFO:teuthology.orchestra.run.vm06.stdout:(79/140): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.3 MB/s | 157 kB 00:00 2026-03-10T08:31:55.185 INFO:teuthology.orchestra.run.vm06.stdout:(80/140): python3-protobuf-3.14.0-17.el9.noarch 655 kB/s | 267 kB 00:00 2026-03-10T08:31:55.206 INFO:teuthology.orchestra.run.vm03.stdout:(95/140): grpc-data-1.46.7-10.el9.noarch.rpm 331 kB/s | 19 kB 00:00 2026-03-10T08:31:55.263 INFO:teuthology.orchestra.run.vm03.stdout:(96/140): libarrow-doc-9.0.0-15.el9.noarch.rpm 435 kB/s | 25 kB 00:00 2026-03-10T08:31:55.319 INFO:teuthology.orchestra.run.vm03.stdout:(97/140): liboath-2.6.12-1.el9.x86_64.rpm 882 kB/s | 49 kB 00:00 2026-03-10T08:31:55.333 INFO:teuthology.orchestra.run.vm06.stdout:(81/140): python3-pyasn1-modules-0.4.8-7.el9.no 1.6 MB/s | 277 kB 00:00 2026-03-10T08:31:55.370 INFO:teuthology.orchestra.run.vm03.stdout:(98/140): libunwind-1.6.2-1.el9.x86_64.rpm 1.3 MB/s | 67 kB 00:00 2026-03-10T08:31:55.400 
INFO:teuthology.orchestra.run.vm06.stdout:(82/140): python3-pytest-6.2.2-7.el9.noarch.rpm 2.4 MB/s | 519 kB 00:00 2026-03-10T08:31:55.410 INFO:teuthology.orchestra.run.vm06.stdout:(83/140): python3-requests-oauthlib-1.3.0-12.el 704 kB/s | 54 kB 00:00 2026-03-10T08:31:55.410 INFO:teuthology.orchestra.run.vm03.stdout:(99/140): libarrow-9.0.0-15.el9.x86_64.rpm 17 MB/s | 4.4 MB 00:00 2026-03-10T08:31:55.435 INFO:teuthology.orchestra.run.vm03.stdout:(100/140): luarocks-3.9.2-5.el9.noarch.rpm 2.3 MB/s | 151 kB 00:00 2026-03-10T08:31:55.503 INFO:teuthology.orchestra.run.vm03.stdout:(101/140): parquet-libs-9.0.0-15.el9.x86_64.rpm 8.9 MB/s | 838 kB 00:00 2026-03-10T08:31:55.526 INFO:teuthology.orchestra.run.vm03.stdout:(102/140): python3-asyncssh-2.13.2-5.el9.noarch 5.9 MB/s | 548 kB 00:00 2026-03-10T08:31:55.549 INFO:teuthology.orchestra.run.vm03.stdout:(103/140): python3-autocommand-2.2.2-8.el9.noar 645 kB/s | 29 kB 00:00 2026-03-10T08:31:55.576 INFO:teuthology.orchestra.run.vm03.stdout:(104/140): python3-backports-tarfile-1.2.0-1.el 1.2 MB/s | 60 kB 00:00 2026-03-10T08:31:55.607 INFO:teuthology.orchestra.run.vm03.stdout:(105/140): python3-bcrypt-3.2.2-1.el9.x86_64.rp 748 kB/s | 43 kB 00:00 2026-03-10T08:31:55.631 INFO:teuthology.orchestra.run.vm03.stdout:(106/140): python3-cachetools-4.2.4-1.el9.noarc 584 kB/s | 32 kB 00:00 2026-03-10T08:31:55.657 INFO:teuthology.orchestra.run.vm06.stdout:(84/140): python3-toml-0.10.2-6.el9.noarch.rpm 169 kB/s | 42 kB 00:00 2026-03-10T08:31:55.657 INFO:teuthology.orchestra.run.vm03.stdout:(107/140): python3-certifi-2023.05.07-4.el9.noa 283 kB/s | 14 kB 00:00 2026-03-10T08:31:55.694 INFO:teuthology.orchestra.run.vm03.stdout:(108/140): python3-cheroot-10.0.1-4.el9.noarch. 
2.7 MB/s | 173 kB 00:00 2026-03-10T08:31:55.752 INFO:teuthology.orchestra.run.vm03.stdout:(109/140): python3-scipy-1.9.3-2.el9.x86_64.rpm 10 MB/s | 19 MB 00:01 2026-03-10T08:31:55.752 INFO:teuthology.orchestra.run.vm06.stdout:(85/140): qatlib-25.08.0-2.el9.x86_64.rpm 2.5 MB/s | 240 kB 00:00 2026-03-10T08:31:55.753 INFO:teuthology.orchestra.run.vm03.stdout:(110/140): python3-cherrypy-18.6.1-2.el9.noarch 3.6 MB/s | 358 kB 00:00 2026-03-10T08:31:55.769 INFO:teuthology.orchestra.run.vm06.stdout:(86/140): python3-babel-2.9.1-2.el9.noarch.rpm 3.1 MB/s | 6.0 MB 00:01 2026-03-10T08:31:55.773 INFO:teuthology.orchestra.run.vm03.stdout:(111/140): python3-google-auth-2.45.0-1.el9.noa 3.2 MB/s | 254 kB 00:00 2026-03-10T08:31:55.812 INFO:teuthology.orchestra.run.vm03.stdout:(112/140): python3-grpcio-tools-1.46.7-10.el9.x 2.4 MB/s | 144 kB 00:00 2026-03-10T08:31:55.825 INFO:teuthology.orchestra.run.vm03.stdout:(113/140): python3-jaraco-8.2.1-3.el9.noarch.rp 204 kB/s | 11 kB 00:00 2026-03-10T08:31:55.855 INFO:teuthology.orchestra.run.vm06.stdout:(87/140): qatlib-service-25.08.0-2.el9.x86_64.r 359 kB/s | 37 kB 00:00 2026-03-10T08:31:55.863 INFO:teuthology.orchestra.run.vm06.stdout:(88/140): qatzip-libs-1.3.1-1.el9.x86_64.rpm 708 kB/s | 66 kB 00:00 2026-03-10T08:31:55.876 INFO:teuthology.orchestra.run.vm03.stdout:(114/140): python3-jaraco-classes-3.2.1-5.el9.n 282 kB/s | 18 kB 00:00 2026-03-10T08:31:55.877 INFO:teuthology.orchestra.run.vm03.stdout:(115/140): python3-jaraco-collections-3.0.0-8.e 446 kB/s | 23 kB 00:00 2026-03-10T08:31:55.922 INFO:teuthology.orchestra.run.vm03.stdout:(116/140): python3-jaraco-context-6.0.1-3.el9.n 430 kB/s | 20 kB 00:00 2026-03-10T08:31:55.937 INFO:teuthology.orchestra.run.vm03.stdout:(117/140): python3-jaraco-functools-3.5.0-2.el9 328 kB/s | 19 kB 00:00 2026-03-10T08:31:55.948 INFO:teuthology.orchestra.run.vm06.stdout:(89/140): socat-1.7.4.1-8.el9.x86_64.rpm 3.2 MB/s | 303 kB 00:00 2026-03-10T08:31:55.977 
INFO:teuthology.orchestra.run.vm03.stdout:(118/140): python3-jaraco-text-4.0.0-2.el9.noar 478 kB/s | 26 kB 00:00 2026-03-10T08:31:56.033 INFO:teuthology.orchestra.run.vm03.stdout:(119/140): python3-kubernetes-26.1.0-3.el9.noar 11 MB/s | 1.0 MB 00:00 2026-03-10T08:31:56.033 INFO:teuthology.orchestra.run.vm03.stdout:(120/140): python3-logutils-0.3.5-21.el9.noarch 831 kB/s | 46 kB 00:00 2026-03-10T08:31:56.034 INFO:teuthology.orchestra.run.vm06.stdout:(90/140): xmlstarlet-1.6.1-20.el9.x86_64.rpm 372 kB/s | 64 kB 00:00 2026-03-10T08:31:56.083 INFO:teuthology.orchestra.run.vm03.stdout:(121/140): python3-natsort-7.1.1-5.el9.noarch.r 1.1 MB/s | 58 kB 00:00 2026-03-10T08:31:56.090 INFO:teuthology.orchestra.run.vm03.stdout:(122/140): python3-more-itertools-8.12.0-2.el9. 1.4 MB/s | 79 kB 00:00 2026-03-10T08:31:56.106 INFO:teuthology.orchestra.run.vm06.stdout:(91/140): lua-devel-5.4.4-4.el9.x86_64.rpm 141 kB/s | 22 kB 00:00 2026-03-10T08:31:56.143 INFO:teuthology.orchestra.run.vm03.stdout:(123/140): python3-portend-3.1.0-2.el9.noarch.r 307 kB/s | 16 kB 00:00 2026-03-10T08:31:56.155 INFO:teuthology.orchestra.run.vm03.stdout:(124/140): python3-pecan-1.4.2-3.el9.noarch.rpm 3.7 MB/s | 272 kB 00:00 2026-03-10T08:31:56.181 INFO:teuthology.orchestra.run.vm03.stdout:(125/140): python3-grpcio-1.46.7-10.el9.x86_64. 
4.8 MB/s | 2.0 MB 00:00 2026-03-10T08:31:56.207 INFO:teuthology.orchestra.run.vm03.stdout:(126/140): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.4 MB/s | 90 kB 00:00 2026-03-10T08:31:56.208 INFO:teuthology.orchestra.run.vm03.stdout:(127/140): python3-repoze-lru-0.7-16.el9.noarch 582 kB/s | 31 kB 00:00 2026-03-10T08:31:56.246 INFO:teuthology.orchestra.run.vm03.stdout:(128/140): python3-routes-2.5.1-5.el9.noarch.rp 2.8 MB/s | 188 kB 00:00 2026-03-10T08:31:56.255 INFO:teuthology.orchestra.run.vm03.stdout:(129/140): python3-rsa-4.9-2.el9.noarch.rpm 1.2 MB/s | 59 kB 00:00 2026-03-10T08:31:56.272 INFO:teuthology.orchestra.run.vm03.stdout:(130/140): python3-tempora-5.0.0-2.el9.noarch.r 563 kB/s | 36 kB 00:00 2026-03-10T08:31:56.306 INFO:teuthology.orchestra.run.vm03.stdout:(131/140): python3-typing-extensions-4.15.0-1.e 1.4 MB/s | 86 kB 00:00 2026-03-10T08:31:56.324 INFO:teuthology.orchestra.run.vm03.stdout:(132/140): python3-webob-1.8.8-2.el9.noarch.rpm 3.3 MB/s | 230 kB 00:00 2026-03-10T08:31:56.327 INFO:teuthology.orchestra.run.vm03.stdout:(133/140): python3-websocket-client-1.2.3-2.el9 1.6 MB/s | 90 kB 00:00 2026-03-10T08:31:56.375 INFO:teuthology.orchestra.run.vm03.stdout:(134/140): python3-xmltodict-0.12.0-15.el9.noar 436 kB/s | 22 kB 00:00 2026-03-10T08:31:56.385 INFO:teuthology.orchestra.run.vm03.stdout:(135/140): python3-zc-lockfile-2.0-10.el9.noarc 346 kB/s | 20 kB 00:00 2026-03-10T08:31:56.391 INFO:teuthology.orchestra.run.vm03.stdout:(136/140): python3-werkzeug-2.0.3-3.el9.1.noarc 4.9 MB/s | 427 kB 00:00 2026-03-10T08:31:56.446 INFO:teuthology.orchestra.run.vm06.stdout:(92/140): protobuf-compiler-3.14.0-17.el9.x86_6 2.0 MB/s | 862 kB 00:00 2026-03-10T08:31:56.449 INFO:teuthology.orchestra.run.vm03.stdout:(137/140): re2-20211101-20.el9.x86_64.rpm 2.5 MB/s | 191 kB 00:00 2026-03-10T08:31:56.861 INFO:teuthology.orchestra.run.vm06.stdout:(93/140): gperftools-libs-2.9.1-3.el9.x86_64.rp 742 kB/s | 308 kB 00:00 2026-03-10T08:31:56.886 
INFO:teuthology.orchestra.run.vm06.stdout:(94/140): abseil-cpp-20211102.0-4.el9.x86_64.rp 707 kB/s | 551 kB 00:00 2026-03-10T08:31:56.891 INFO:teuthology.orchestra.run.vm06.stdout:(95/140): grpc-data-1.46.7-10.el9.noarch.rpm 645 kB/s | 19 kB 00:00 2026-03-10T08:31:56.969 INFO:teuthology.orchestra.run.vm06.stdout:(96/140): python3-scipy-1.9.3-2.el9.x86_64.rpm 12 MB/s | 19 MB 00:01 2026-03-10T08:31:56.997 INFO:teuthology.orchestra.run.vm03.stdout:(138/140): thrift-0.15.0-4.el9.x86_64.rpm 2.6 MB/s | 1.6 MB 00:00 2026-03-10T08:31:57.132 INFO:teuthology.orchestra.run.vm06.stdout:(97/140): libarrow-9.0.0-15.el9.x86_64.rpm 18 MB/s | 4.4 MB 00:00 2026-03-10T08:31:57.134 INFO:teuthology.orchestra.run.vm06.stdout:(98/140): libarrow-doc-9.0.0-15.el9.noarch.rpm 102 kB/s | 25 kB 00:00 2026-03-10T08:31:57.134 INFO:teuthology.orchestra.run.vm06.stdout:(99/140): liboath-2.6.12-1.el9.x86_64.rpm 297 kB/s | 49 kB 00:00 2026-03-10T08:31:57.165 INFO:teuthology.orchestra.run.vm06.stdout:(100/140): libunwind-1.6.2-1.el9.x86_64.rpm 2.0 MB/s | 67 kB 00:00 2026-03-10T08:31:57.175 INFO:teuthology.orchestra.run.vm06.stdout:(101/140): parquet-libs-9.0.0-15.el9.x86_64.rpm 21 MB/s | 838 kB 00:00 2026-03-10T08:31:57.225 INFO:teuthology.orchestra.run.vm06.stdout:(102/140): luarocks-3.9.2-5.el9.noarch.rpm 1.6 MB/s | 151 kB 00:00 2026-03-10T08:31:57.227 INFO:teuthology.orchestra.run.vm06.stdout:(103/140): python3-asyncssh-2.13.2-5.el9.noarch 8.7 MB/s | 548 kB 00:00 2026-03-10T08:31:57.228 INFO:teuthology.orchestra.run.vm06.stdout:(104/140): python3-autocommand-2.2.2-8.el9.noar 558 kB/s | 29 kB 00:00 2026-03-10T08:31:57.259 INFO:teuthology.orchestra.run.vm06.stdout:(105/140): python3-cachetools-4.2.4-1.el9.noarc 1.0 MB/s | 32 kB 00:00 2026-03-10T08:31:57.259 INFO:teuthology.orchestra.run.vm06.stdout:(106/140): python3-bcrypt-3.2.2-1.el9.x86_64.rp 1.4 MB/s | 43 kB 00:00 2026-03-10T08:31:57.260 INFO:teuthology.orchestra.run.vm06.stdout:(107/140): python3-backports-tarfile-1.2.0-1.el 1.8 MB/s | 60 kB 
00:00 2026-03-10T08:31:57.289 INFO:teuthology.orchestra.run.vm06.stdout:(108/140): python3-certifi-2023.05.07-4.el9.noa 466 kB/s | 14 kB 00:00 2026-03-10T08:31:57.291 INFO:teuthology.orchestra.run.vm06.stdout:(109/140): python3-cheroot-10.0.1-4.el9.noarch. 5.4 MB/s | 173 kB 00:00 2026-03-10T08:31:57.294 INFO:teuthology.orchestra.run.vm06.stdout:(110/140): python3-cherrypy-18.6.1-2.el9.noarch 10 MB/s | 358 kB 00:00 2026-03-10T08:31:57.325 INFO:teuthology.orchestra.run.vm06.stdout:(111/140): python3-google-auth-2.45.0-1.el9.noa 6.9 MB/s | 254 kB 00:00 2026-03-10T08:31:57.373 INFO:teuthology.orchestra.run.vm03.stdout:(139/140): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.4 MB/s | 3.2 MB 00:00 2026-03-10T08:31:57.422 INFO:teuthology.orchestra.run.vm06.stdout:(112/140): python3-grpcio-1.46.7-10.el9.x86_64. 16 MB/s | 2.0 MB 00:00 2026-03-10T08:31:57.423 INFO:teuthology.orchestra.run.vm06.stdout:(113/140): python3-grpcio-tools-1.46.7-10.el9.x 1.1 MB/s | 144 kB 00:00 2026-03-10T08:31:57.423 INFO:teuthology.orchestra.run.vm06.stdout:(114/140): python3-jaraco-8.2.1-3.el9.noarch.rp 109 kB/s | 11 kB 00:00 2026-03-10T08:31:57.456 INFO:teuthology.orchestra.run.vm06.stdout:(115/140): python3-jaraco-classes-3.2.1-5.el9.n 514 kB/s | 18 kB 00:00 2026-03-10T08:31:57.456 INFO:teuthology.orchestra.run.vm03.stdout:(140/140): librados2-19.2.3-678.ge911bdeb.el9.x 3.2 MB/s | 3.4 MB 00:01 2026-03-10T08:31:57.458 INFO:teuthology.orchestra.run.vm06.stdout:(116/140): python3-jaraco-collections-3.0.0-8.e 672 kB/s | 23 kB 00:00 2026-03-10T08:31:57.458 INFO:teuthology.orchestra.run.vm06.stdout:(117/140): python3-jaraco-context-6.0.1-3.el9.n 561 kB/s | 20 kB 00:00 2026-03-10T08:31:57.461 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-10T08:31:57.461 INFO:teuthology.orchestra.run.vm03.stdout:Total 15 MB/s | 211 MB 00:13 2026-03-10T08:31:57.486 INFO:teuthology.orchestra.run.vm06.stdout:(118/140): 
python3-jaraco-functools-3.5.0-2.el9 651 kB/s | 19 kB 00:00 2026-03-10T08:31:57.488 INFO:teuthology.orchestra.run.vm06.stdout:(119/140): python3-jaraco-text-4.0.0-2.el9.noar 887 kB/s | 26 kB 00:00 2026-03-10T08:31:57.500 INFO:teuthology.orchestra.run.vm06.stdout:(120/140): python3-kubernetes-26.1.0-3.el9.noar 25 MB/s | 1.0 MB 00:00 2026-03-10T08:31:57.517 INFO:teuthology.orchestra.run.vm06.stdout:(121/140): python3-logutils-0.3.5-21.el9.noarch 1.5 MB/s | 46 kB 00:00 2026-03-10T08:31:57.518 INFO:teuthology.orchestra.run.vm06.stdout:(122/140): python3-more-itertools-8.12.0-2.el9. 2.6 MB/s | 79 kB 00:00 2026-03-10T08:31:57.531 INFO:teuthology.orchestra.run.vm06.stdout:(123/140): python3-natsort-7.1.1-5.el9.noarch.r 1.8 MB/s | 58 kB 00:00 2026-03-10T08:31:57.550 INFO:teuthology.orchestra.run.vm06.stdout:(124/140): python3-pecan-1.4.2-3.el9.noarch.rpm 8.1 MB/s | 272 kB 00:00 2026-03-10T08:31:57.550 INFO:teuthology.orchestra.run.vm06.stdout:(125/140): python3-portend-3.1.0-2.el9.noarch.r 514 kB/s | 16 kB 00:00 2026-03-10T08:31:57.562 INFO:teuthology.orchestra.run.vm06.stdout:(126/140): python3-pyOpenSSL-21.0.0-1.el9.noarc 2.9 MB/s | 90 kB 00:00 2026-03-10T08:31:57.585 INFO:teuthology.orchestra.run.vm06.stdout:(127/140): python3-repoze-lru-0.7-16.el9.noarch 872 kB/s | 31 kB 00:00 2026-03-10T08:31:57.586 INFO:teuthology.orchestra.run.vm06.stdout:(128/140): python3-routes-2.5.1-5.el9.noarch.rp 5.2 MB/s | 188 kB 00:00 2026-03-10T08:31:57.592 INFO:teuthology.orchestra.run.vm06.stdout:(129/140): python3-rsa-4.9-2.el9.noarch.rpm 1.9 MB/s | 59 kB 00:00 2026-03-10T08:31:57.618 INFO:teuthology.orchestra.run.vm06.stdout:(130/140): python3-tempora-5.0.0-2.el9.noarch.r 1.1 MB/s | 36 kB 00:00 2026-03-10T08:31:57.618 INFO:teuthology.orchestra.run.vm06.stdout:(131/140): python3-typing-extensions-4.15.0-1.e 2.7 MB/s | 86 kB 00:00 2026-03-10T08:31:57.624 INFO:teuthology.orchestra.run.vm06.stdout:(132/140): python3-webob-1.8.8-2.el9.noarch.rpm 7.0 MB/s | 230 kB 00:00 
2026-03-10T08:31:57.648 INFO:teuthology.orchestra.run.vm06.stdout:(133/140): python3-websocket-client-1.2.3-2.el9 2.9 MB/s | 90 kB 00:00 2026-03-10T08:31:57.652 INFO:teuthology.orchestra.run.vm06.stdout:(134/140): python3-werkzeug-2.0.3-3.el9.1.noarc 12 MB/s | 427 kB 00:00 2026-03-10T08:31:57.654 INFO:teuthology.orchestra.run.vm06.stdout:(135/140): python3-xmltodict-0.12.0-15.el9.noar 745 kB/s | 22 kB 00:00 2026-03-10T08:31:57.678 INFO:teuthology.orchestra.run.vm06.stdout:(136/140): python3-zc-lockfile-2.0-10.el9.noarc 665 kB/s | 20 kB 00:00 2026-03-10T08:31:57.685 INFO:teuthology.orchestra.run.vm06.stdout:(137/140): re2-20211101-20.el9.x86_64.rpm 5.8 MB/s | 191 kB 00:00 2026-03-10T08:31:57.746 INFO:teuthology.orchestra.run.vm06.stdout:(138/140): thrift-0.15.0-4.el9.x86_64.rpm 17 MB/s | 1.6 MB 00:00 2026-03-10T08:31:58.189 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-10T08:31:58.239 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-10T08:31:58.239 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-10T08:31:58.636 INFO:teuthology.orchestra.run.vm06.stdout:(139/140): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.3 MB/s | 3.2 MB 00:00 2026-03-10T08:31:58.699 INFO:teuthology.orchestra.run.vm06.stdout:(140/140): librados2-19.2.3-678.ge911bdeb.el9.x 3.4 MB/s | 3.4 MB 00:01 2026-03-10T08:31:58.703 INFO:teuthology.orchestra.run.vm06.stdout:-------------------------------------------------------------------------------- 2026-03-10T08:31:58.703 INFO:teuthology.orchestra.run.vm06.stdout:Total 15 MB/s | 211 MB 00:13 2026-03-10T08:31:59.087 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 2026-03-10T08:31:59.087 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-10T08:31:59.326 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check 2026-03-10T08:31:59.385 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded. 
2026-03-10T08:31:59.385 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:32:00.010 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:32:00.036 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/142
2026-03-10T08:32:00.049 INFO:teuthology.orchestra.run.vm03.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/142
2026-03-10T08:32:00.226 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/142
2026-03-10T08:32:00.228 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-10T08:32:00.276 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:32:00.276 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:32:00.297 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-10T08:32:00.313 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-10T08:32:00.347 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-10T08:32:00.357 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-10T08:32:00.360 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/142
2026-03-10T08:32:00.363 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/142
2026-03-10T08:32:00.375 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/142
2026-03-10T08:32:00.381 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-packaging-20.9-5.el9.noarch 10/142
2026-03-10T08:32:00.392 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/142
2026-03-10T08:32:00.393 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-10T08:32:00.433 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-10T08:32:00.435 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-10T08:32:00.453 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-10T08:32:00.490 INFO:teuthology.orchestra.run.vm03.stdout: Installing : re2-1:20211101-20.el9.x86_64 14/142
2026-03-10T08:32:00.529 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 15/142
2026-03-10T08:32:00.536 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 16/142
2026-03-10T08:32:00.543 INFO:teuthology.orchestra.run.vm03.stdout: Installing : liboath-2.6.12-1.el9.x86_64 17/142
2026-03-10T08:32:00.548 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 18/142
2026-03-10T08:32:00.574 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 19/142
2026-03-10T08:32:00.584 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/142
2026-03-10T08:32:00.595 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 21/142
2026-03-10T08:32:00.601 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 22/142
2026-03-10T08:32:00.607 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-5.4.4-4.el9.x86_64 23/142
2026-03-10T08:32:00.612 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 24/142
2026-03-10T08:32:00.641 INFO:teuthology.orchestra.run.vm03.stdout: Installing : unzip-6.0-59.el9.x86_64 25/142
2026-03-10T08:32:00.658 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 26/142
2026-03-10T08:32:00.662 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 27/142
2026-03-10T08:32:00.670 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 28/142
2026-03-10T08:32:00.672 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 29/142
2026-03-10T08:32:00.702 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 30/142
2026-03-10T08:32:00.710 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 31/142
2026-03-10T08:32:00.721 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-10T08:32:00.738 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 33/142
2026-03-10T08:32:00.746 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 34/142
2026-03-10T08:32:00.778 INFO:teuthology.orchestra.run.vm03.stdout: Installing : zip-3.0-35.el9.x86_64 35/142
2026-03-10T08:32:00.784 INFO:teuthology.orchestra.run.vm03.stdout: Installing : luarocks-3.9.2-5.el9.noarch 36/142
2026-03-10T08:32:00.793 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 37/142
2026-03-10T08:32:00.826 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 38/142
2026-03-10T08:32:00.888 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 39/142
2026-03-10T08:32:00.904 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 40/142
2026-03-10T08:32:00.915 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rsa-4.9-2.el9.noarch 41/142
2026-03-10T08:32:00.921 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 42/142
2026-03-10T08:32:00.928 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 43/142
2026-03-10T08:32:00.939 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 44/142
2026-03-10T08:32:00.945 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 45/142
2026-03-10T08:32:00.950 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 46/142
2026-03-10T08:32:00.968 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 47/142
2026-03-10T08:32:00.994 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 48/142
2026-03-10T08:32:01.001 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 49/142
2026-03-10T08:32:01.009 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 50/142
2026-03-10T08:32:01.023 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 51/142
2026-03-10T08:32:01.036 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 52/142
2026-03-10T08:32:01.051 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 53/142
2026-03-10T08:32:01.117 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 54/142
2026-03-10T08:32:01.126 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 55/142
2026-03-10T08:32:01.138 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 56/142
2026-03-10T08:32:01.187 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 57/142
2026-03-10T08:32:01.283 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:32:01.301 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/142
2026-03-10T08:32:01.315 INFO:teuthology.orchestra.run.vm06.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/142
2026-03-10T08:32:01.492 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/142
2026-03-10T08:32:01.495 INFO:teuthology.orchestra.run.vm06.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-10T08:32:01.556 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-10T08:32:01.558 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-10T08:32:01.590 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-10T08:32:01.590 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 58/142
2026-03-10T08:32:01.600 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-10T08:32:01.605 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/142
2026-03-10T08:32:01.607 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/142
2026-03-10T08:32:01.608 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 59/142
2026-03-10T08:32:01.615 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 60/142
2026-03-10T08:32:01.620 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/142
2026-03-10T08:32:01.624 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 61/142
2026-03-10T08:32:01.627 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-packaging-20.9-5.el9.noarch 10/142
2026-03-10T08:32:01.633 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 62/142
2026-03-10T08:32:01.638 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/142
2026-03-10T08:32:01.639 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-10T08:32:01.639 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 63/142
2026-03-10T08:32:01.644 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 64/142
2026-03-10T08:32:01.654 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 65/142
2026-03-10T08:32:01.658 INFO:teuthology.orchestra.run.vm03.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 66/142
2026-03-10T08:32:01.661 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 67/142
2026-03-10T08:32:01.676 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-10T08:32:01.678 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-10T08:32:01.693 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-10T08:32:01.694 INFO:teuthology.orchestra.run.vm03.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 68/142
2026-03-10T08:32:01.730 INFO:teuthology.orchestra.run.vm06.stdout: Installing : re2-1:20211101-20.el9.x86_64 14/142
2026-03-10T08:32:01.747 INFO:teuthology.orchestra.run.vm03.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 69/142
2026-03-10T08:32:01.761 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 70/142
2026-03-10T08:32:01.768 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 15/142
2026-03-10T08:32:01.774 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 16/142
2026-03-10T08:32:01.782 INFO:teuthology.orchestra.run.vm06.stdout: Installing : liboath-2.6.12-1.el9.x86_64 17/142
2026-03-10T08:32:01.788 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 18/142
2026-03-10T08:32:01.817 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 19/142
2026-03-10T08:32:01.817 INFO:teuthology.orchestra.run.vm03.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 71/142
2026-03-10T08:32:01.826 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/142
2026-03-10T08:32:01.839 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 21/142
2026-03-10T08:32:01.847 INFO:teuthology.orchestra.run.vm06.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 22/142
2026-03-10T08:32:01.852 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lua-5.4.4-4.el9.x86_64 23/142
2026-03-10T08:32:01.855 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-py-1.10.0-6.el9.noarch 72/142
2026-03-10T08:32:01.857 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 24/142
2026-03-10T08:32:01.870 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 73/142
2026-03-10T08:32:01.882 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 74/142
2026-03-10T08:32:01.889 INFO:teuthology.orchestra.run.vm06.stdout: Installing : unzip-6.0-59.el9.x86_64 25/142
2026-03-10T08:32:01.890 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pluggy-0.13.1-7.el9.noarch 75/142
2026-03-10T08:32:01.914 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 26/142
2026-03-10T08:32:01.919 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 27/142
2026-03-10T08:32:01.927 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 28/142
2026-03-10T08:32:01.930 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 29/142
2026-03-10T08:32:01.934 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-iniconfig-1.1.1-7.el9.noarch 76/142
2026-03-10T08:32:01.972 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 30/142
2026-03-10T08:32:01.981 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 31/142
2026-03-10T08:32:02.001 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-10T08:32:02.018 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 33/142
2026-03-10T08:32:02.028 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 34/142
2026-03-10T08:32:02.062 INFO:teuthology.orchestra.run.vm06.stdout: Installing : zip-3.0-35.el9.x86_64 35/142
2026-03-10T08:32:02.068 INFO:teuthology.orchestra.run.vm06.stdout: Installing : luarocks-3.9.2-5.el9.noarch 36/142
2026-03-10T08:32:02.078 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 37/142
2026-03-10T08:32:02.115 INFO:teuthology.orchestra.run.vm06.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 38/142
2026-03-10T08:32:02.190 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 39/142
2026-03-10T08:32:02.208 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 40/142
2026-03-10T08:32:02.219 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rsa-4.9-2.el9.noarch 41/142
2026-03-10T08:32:02.224 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 42/142
2026-03-10T08:32:02.232 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 43/142
2026-03-10T08:32:02.240 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 77/142
2026-03-10T08:32:02.242 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 44/142
2026-03-10T08:32:02.252 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 45/142
2026-03-10T08:32:02.258 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 46/142
2026-03-10T08:32:02.272 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 78/142
2026-03-10T08:32:02.280 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 79/142
2026-03-10T08:32:02.282 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 47/142
2026-03-10T08:32:02.318 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 48/142
2026-03-10T08:32:02.329 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 49/142
2026-03-10T08:32:02.343 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 50/142
2026-03-10T08:32:02.346 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-0.3.29-1.el9.x86_64 80/142
2026-03-10T08:32:02.350 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 81/142
2026-03-10T08:32:02.360 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 51/142
2026-03-10T08:32:02.374 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 52/142
2026-03-10T08:32:02.376 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 82/142
2026-03-10T08:32:02.387 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 53/142
2026-03-10T08:32:02.456 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 54/142
2026-03-10T08:32:02.466 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 55/142
2026-03-10T08:32:02.477 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 56/142
2026-03-10T08:32:02.531 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 57/142
2026-03-10T08:32:02.783 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 83/142
2026-03-10T08:32:02.875 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 84/142
2026-03-10T08:32:02.948 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 58/142
2026-03-10T08:32:02.967 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 59/142
2026-03-10T08:32:02.984 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 60/142
2026-03-10T08:32:02.993 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 61/142
2026-03-10T08:32:03.003 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 62/142
2026-03-10T08:32:03.009 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 63/142
2026-03-10T08:32:03.017 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 64/142
2026-03-10T08:32:03.030 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 65/142
2026-03-10T08:32:03.036 INFO:teuthology.orchestra.run.vm06.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 66/142
2026-03-10T08:32:03.039 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 67/142
2026-03-10T08:32:03.072 INFO:teuthology.orchestra.run.vm06.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 68/142
2026-03-10T08:32:03.137 INFO:teuthology.orchestra.run.vm06.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 69/142
2026-03-10T08:32:03.152 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 70/142
2026-03-10T08:32:03.215 INFO:teuthology.orchestra.run.vm06.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 71/142
2026-03-10T08:32:03.257 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-py-1.10.0-6.el9.noarch 72/142
2026-03-10T08:32:03.273 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 73/142
2026-03-10T08:32:03.286 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 74/142
2026-03-10T08:32:03.292 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pluggy-0.13.1-7.el9.noarch 75/142
2026-03-10T08:32:03.338 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-iniconfig-1.1.1-7.el9.noarch 76/142
2026-03-10T08:32:03.631 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 77/142
2026-03-10T08:32:03.679 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 78/142
2026-03-10T08:32:03.687 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 79/142
2026-03-10T08:32:03.710 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 85/142
2026-03-10T08:32:03.741 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 86/142
2026-03-10T08:32:03.749 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 87/142
2026-03-10T08:32:03.755 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 88/142
2026-03-10T08:32:03.763 INFO:teuthology.orchestra.run.vm06.stdout: Installing : openblas-0.3.29-1.el9.x86_64 80/142
2026-03-10T08:32:03.779 INFO:teuthology.orchestra.run.vm06.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 81/142
2026-03-10T08:32:03.806 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 82/142
2026-03-10T08:32:03.919 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 89/142
2026-03-10T08:32:03.923 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-10T08:32:03.957 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-10T08:32:03.961 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 91/142
2026-03-10T08:32:03.969 INFO:teuthology.orchestra.run.vm03.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 92/142
2026-03-10T08:32:04.203 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 83/142
2026-03-10T08:32:04.226 INFO:teuthology.orchestra.run.vm03.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 93/142
2026-03-10T08:32:04.233 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-10T08:32:04.257 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-10T08:32:04.259 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 95/142
2026-03-10T08:32:04.314 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 84/142
2026-03-10T08:32:05.123 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 85/142
2026-03-10T08:32:05.378 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-10T08:32:05.647 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 86/142
2026-03-10T08:32:05.709 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-10T08:32:05.716 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 87/142
2026-03-10T08:32:05.723 INFO:teuthology.orchestra.run.vm06.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 88/142
2026-03-10T08:32:05.737 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-10T08:32:05.760 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ply-3.11-14.el9.noarch 97/142
2026-03-10T08:32:05.781 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 98/142
2026-03-10T08:32:05.875 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 99/142
2026-03-10T08:32:05.892 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 100/142
2026-03-10T08:32:05.895 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 89/142
2026-03-10T08:32:05.901 INFO:teuthology.orchestra.run.vm06.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-10T08:32:05.922 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 101/142
2026-03-10T08:32:05.936 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-10T08:32:05.941 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 91/142
2026-03-10T08:32:05.951 INFO:teuthology.orchestra.run.vm06.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 92/142
2026-03-10T08:32:05.962 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 102/142
2026-03-10T08:32:06.023 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 103/142
2026-03-10T08:32:06.033 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 104/142
2026-03-10T08:32:06.039 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 105/142
2026-03-10T08:32:06.045 INFO:teuthology.orchestra.run.vm03.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 106/142
2026-03-10T08:32:06.051 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 107/142
2026-03-10T08:32:06.054 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-10T08:32:06.072 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-10T08:32:06.235 INFO:teuthology.orchestra.run.vm06.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 93/142
2026-03-10T08:32:06.238 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-10T08:32:06.260 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-10T08:32:06.263 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 95/142
2026-03-10T08:32:06.391 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 109/142
2026-03-10T08:32:06.398 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-10T08:32:06.443 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-10T08:32:06.443 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T08:32:06.443 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T08:32:06.443 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:06.448 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-10T08:32:07.474 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-10T08:32:07.538 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-10T08:32:07.565 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-10T08:32:07.599 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ply-3.11-14.el9.noarch 97/142
2026-03-10T08:32:07.623 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 98/142
2026-03-10T08:32:07.793 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 99/142
2026-03-10T08:32:07.810 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 100/142
2026-03-10T08:32:07.847 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 101/142
2026-03-10T08:32:07.897 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 102/142
2026-03-10T08:32:07.974 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 103/142
2026-03-10T08:32:07.987 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 104/142
2026-03-10T08:32:07.994 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 105/142
2026-03-10T08:32:08.002 INFO:teuthology.orchestra.run.vm06.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 106/142
2026-03-10T08:32:08.007 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 107/142
2026-03-10T08:32:08.011 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-10T08:32:08.036 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-10T08:32:08.395 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 109/142
2026-03-10T08:32:08.402 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-10T08:32:08.456 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-10T08:32:08.456 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T08:32:08.457 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T08:32:08.457 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:08.470 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp
2026-03-10T08:32:13.269 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:13.397 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-10T08:32:13.424 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-10T08:32:13.424 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:13.424 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T08:32:13.424 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T08:32:13.424 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T08:32:13.424 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:13.661 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-10T08:32:13.689 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-10T08:32:13.689 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:13.689 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T08:32:13.689 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T08:32:13.689 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T08:32:13.689 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:13.699 INFO:teuthology.orchestra.run.vm03.stdout: Installing : mailcap-2.1.49-5.el9.noarch 114/142
2026-03-10T08:32:13.704 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 115/142
2026-03-10T08:32:13.727 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-10T08:32:13.727 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'qat' with GID 994.
2026-03-10T08:32:13.727 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T08:32:13.727 INFO:teuthology.orchestra.run.vm03.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T08:32:13.727 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:13.740 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-10T08:32:13.772 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-10T08:32:13.772 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T08:32:13.773 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:13.820 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 117/142
2026-03-10T08:32:13.903 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 118/142
2026-03-10T08:32:13.909 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-10T08:32:13.925 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-10T08:32:13.925 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:13.925 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T08:32:13.925 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:14.811 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-10T08:32:14.839 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-10T08:32:14.839 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:14.839 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T08:32:14.839 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T08:32:14.839 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T08:32:14.839 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:15.224 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-10T08:32:15.228 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-10T08:32:15.236 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 122/142
2026-03-10T08:32:15.261 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 123/142
2026-03-10T08:32:15.264 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /sys
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /proc
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /mnt
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /var/tmp
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /home
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /root
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /tmp
2026-03-10T08:32:15.799 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:15.858 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-10T08:32:15.865 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-10T08:32:15.942 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-10T08:32:15.972 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-10T08:32:15.972 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:15.972 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T08:32:15.972 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T08:32:15.972 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T08:32:15.972 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:16.240 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-10T08:32:16.266 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-10T08:32:16.266 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:16.266 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T08:32:16.266 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T08:32:16.266 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T08:32:16.266 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:16.278 INFO:teuthology.orchestra.run.vm06.stdout: Installing : mailcap-2.1.49-5.el9.noarch 114/142
2026-03-10T08:32:16.282 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 115/142
2026-03-10T08:32:16.302 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-10T08:32:16.302 INFO:teuthology.orchestra.run.vm06.stdout:Creating group 'qat' with GID 994.
2026-03-10T08:32:16.302 INFO:teuthology.orchestra.run.vm06.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T08:32:16.302 INFO:teuthology.orchestra.run.vm06.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T08:32:16.302 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:16.314 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-10T08:32:16.346 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-10T08:32:16.346 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T08:32:16.346 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:16.405 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 117/142
2026-03-10T08:32:16.406 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-10T08:32:16.408 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-10T08:32:16.474 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-10T08:32:16.499 INFO:teuthology.orchestra.run.vm06.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 118/142
2026-03-10T08:32:16.503 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-10T08:32:16.518 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-10T08:32:16.518 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:16.518 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T08:32:16.518 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:16.532 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 127/142
2026-03-10T08:32:16.535 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-10T08:32:16.557 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-10T08:32:16.557 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:16.557 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T08:32:16.557 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T08:32:16.557 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T08:32:16.557 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:16.572 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-10T08:32:16.585 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-10T08:32:17.118 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 130/142
2026-03-10T08:32:17.122 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-10T08:32:17.146 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-10T08:32:17.146 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:17.146 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T08:32:17.146 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T08:32:17.146 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T08:32:17.147 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:17.161 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-10T08:32:17.183 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-10T08:32:17.183 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:17.183 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T08:32:17.183 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:17.335 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-10T08:32:17.347 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-10T08:32:17.360 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-10T08:32:17.360 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:17.360 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T08:32:17.361 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T08:32:17.361 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T08:32:17.361 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:17.372 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-10T08:32:17.372 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:17.372 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T08:32:17.372 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T08:32:17.372 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T08:32:17.372 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:32:17.425 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-10T08:32:17.429 INFO:teuthology.orchestra.run.vm06.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-10T08:32:17.435 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 122/142
2026-03-10T08:32:17.460 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 123/142
2026-03-10T08:32:17.463 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-10T08:32:18.020 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-10T08:32:18.026 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-10T08:32:18.560 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-10T08:32:18.562 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-10T08:32:18.623 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-10T08:32:18.679 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 127/142
2026-03-10T08:32:18.682 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-10T08:32:18.704 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-10T08:32:18.704 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:18.704 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T08:32:18.704 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T08:32:18.704 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T08:32:18.704 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:18.719 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-10T08:32:18.730 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-10T08:32:19.254 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 130/142
2026-03-10T08:32:19.258 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-10T08:32:19.280 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-10T08:32:19.280 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:19.280 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T08:32:19.281 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T08:32:19.281 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T08:32:19.281 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:19.293 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-10T08:32:19.315 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-10T08:32:19.315 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:19.315 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T08:32:19.315 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:19.479 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-10T08:32:19.501 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-10T08:32:19.501 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:32:19.501 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T08:32:19.501 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T08:32:19.501 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T08:32:19.501 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:19.967 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 134/142
2026-03-10T08:32:19.978 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/142
2026-03-10T08:32:20.029 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 136/142
2026-03-10T08:32:20.037 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pytest-6.2.2-7.el9.noarch 137/142
2026-03-10T08:32:20.092 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 138/142
2026-03-10T08:32:20.102 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142
2026-03-10T08:32:20.106 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 140/142
2026-03-10T08:32:20.106 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 141/142
2026-03-10T08:32:20.122 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 141/142
2026-03-10T08:32:20.122 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 142/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 142/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/142
2026-03-10T08:32:21.579 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/142
2026-03-10T08:32:21.580 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 51/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/142
2026-03-10T08:32:21.582 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-iniconfig-1.1.1-7.el9.noarch 69/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pluggy-0.13.1-7.el9.noarch 78/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 79/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-py-1.10.0-6.el9.noarch 80/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 81/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 82/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pytest-6.2.2-7.el9.noarch 83/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 84/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 85/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 86/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 87/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 88/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 89/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 90/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 91/142
2026-03-10T08:32:21.583 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 92/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 93/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 94/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 95/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 96/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 97/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 98/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 99/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 100/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 101/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 102/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 103/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 104/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 105/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 106/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 107/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 108/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 109/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 110/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 111/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 112/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 113/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 114/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 115/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 116/142
2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 117/142 2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 118/142 2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 119/142 2026-03-10T08:32:21.584 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 120/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 121/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 124/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 125/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 126/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 127/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 128/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 129/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 130/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 131/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 132/142 
2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 133/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 134/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 135/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 136/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : re2-1:20211101-20.el9.x86_64 137/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 138/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 140/142 2026-03-10T08:32:21.585 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 141/142 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 142/142 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout:Upgraded: 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout:Installed: 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.713 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T08:32:21.714 
INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T08:32:21.714 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T08:32:21.715 
INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-iniconfig-1.1.1-7.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T08:32:21.715 
INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pluggy-0.13.1-7.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-py-1.10.0-6.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytest-6.2.2-7.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T08:32:21.715 
INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T08:32:21.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T08:32:21.716 
INFO:teuthology.orchestra.run.vm03.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:32:21.716 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:32:21.810 DEBUG:teuthology.parallel:result is None 2026-03-10T08:32:22.116 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 134/142 2026-03-10T08:32:22.127 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/142 2026-03-10T08:32:22.179 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 136/142 2026-03-10T08:32:22.187 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pytest-6.2.2-7.el9.noarch 137/142 2026-03-10T08:32:22.243 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 138/142 2026-03-10T08:32:22.253 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142 2026-03-10T08:32:22.257 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 140/142 2026-03-10T08:32:22.258 INFO:teuthology.orchestra.run.vm06.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 141/142 2026-03-10T08:32:22.275 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 141/142 2026-03-10T08:32:22.275 INFO:teuthology.orchestra.run.vm06.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 142/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 142/142 2026-03-10T08:32:23.749 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/142 2026-03-10T08:32:23.749 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: 
Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/142 2026-03-10T08:32:23.750 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: 
Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/142 2026-03-10T08:32:23.754 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-pycparser-2.20-6.el9.noarch 47/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : zip-3.0-35.el9.x86_64 51/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: 
Verifying : openblas-0.3.29-1.el9.x86_64 64/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/142 2026-03-10T08:32:23.755 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-iniconfig-1.1.1-7.el9.noarch 69/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pluggy-0.13.1-7.el9.noarch 78/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 79/142 2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-py-1.10.0-6.el9.noarch 80/142 
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 81/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 82/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pytest-6.2.2-7.el9.noarch 83/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 84/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 85/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 86/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 87/142
2026-03-10T08:32:23.756 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 88/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 89/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 90/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 91/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 92/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 93/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 94/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 95/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 96/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 97/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 98/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 99/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 100/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 101/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 102/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 103/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 104/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 105/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 106/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 107/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 108/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 109/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 110/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 111/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 112/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 113/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 114/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 115/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 116/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 117/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 118/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 119/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 120/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 121/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 124/142
2026-03-10T08:32:23.757 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 125/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 126/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 127/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 128/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 129/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 130/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 131/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 132/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 133/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 134/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 135/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 136/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : re2-1:20211101-20.el9.x86_64 137/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 138/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 140/142
2026-03-10T08:32:23.758 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 141/142
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 142/142
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout:Upgraded:
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout:Installed:
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T08:32:23.864 INFO:teuthology.orchestra.run.vm06.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T08:32:23.865 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-iniconfig-1.1.1-7.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T08:32:23.866 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pluggy-0.13.1-7.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-py-1.10.0-6.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytest-6.2.2-7.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T08:32:23.867 INFO:teuthology.orchestra.run.vm06.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout: zip-3.0-35.el9.x86_64
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:32:23.868 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:32:23.972 DEBUG:teuthology.parallel:result is None
2026-03-10T08:32:23.972 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:32:24.654 DEBUG:teuthology.orchestra.run.vm03:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T08:32:24.673 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T08:32:24.673 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T08:32:24.673 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T08:32:24.675 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:32:25.375 DEBUG:teuthology.orchestra.run.vm06:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T08:32:25.396 INFO:teuthology.orchestra.run.vm06.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T08:32:25.396 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T08:32:25.396 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T08:32:25.397 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-10T08:32:25.397 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:32:25.397 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T08:32:25.425 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T08:32:25.425 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T08:32:25.463 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-10T08:32:25.464 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:32:25.464 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T08:32:25.491 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T08:32:25.555 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T08:32:25.555 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T08:32:25.581 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T08:32:25.648 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-10T08:32:25.648 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:32:25.648 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T08:32:25.678 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T08:32:25.744 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T08:32:25.744 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T08:32:25.770 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T08:32:25.834 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-10T08:32:25.835 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:32:25.835 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T08:32:25.862 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T08:32:25.929 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-10T08:32:25.929 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T08:32:25.956 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T08:32:26.023 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T08:32:26.066 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'global': {'mon election default strategy': 3, 'ms bind msgr2': False, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'but it is still running', 'overall HEALTH_', '\\(OSDMAP_FLAGS\\)', '\\(PG_', '\\(OSD_', '\\(OBJECT_', '\\(POOL_APP_NOT_ENABLED\\)'], 'log-only-match': ['CEPHADM_'], 'mon_bind_msgr2': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'root'}
2026-03-10T08:32:26.066 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:32:26.066 INFO:tasks.cephadm:Cluster fsid is aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
2026-03-10T08:32:26.066 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T08:32:26.066 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '[v1:192.168.123.103:6789]', 'mon.c': '[v1:192.168.123.103:6790]', 'mon.b': '[v1:192.168.123.106:6789]'}
2026-03-10T08:32:26.066 INFO:tasks.cephadm:First mon is mon.a on vm03
2026-03-10T08:32:26.066 INFO:tasks.cephadm:First mgr is y
2026-03-10T08:32:26.066 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T08:32:26.066 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s)
2026-03-10T08:32:26.093 DEBUG:teuthology.orchestra.run.vm06:> sudo hostname $(hostname -s)
2026-03-10T08:32:26.120 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-10T08:32:26.120 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:32:27.507 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T08:32:28.134 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:32:28.135 INFO:tasks.cephadm:Discovered chacra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T08:32:28.135 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T08:32:28.135 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T08:32:33.237 INFO:teuthology.orchestra.run.vm03.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 08:32 /home/ubuntu/cephtest/cephadm
2026-03-10T08:32:33.238 DEBUG:teuthology.orchestra.run.vm06:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T08:32:38.221 INFO:teuthology.orchestra.run.vm06.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 08:32 /home/ubuntu/cephtest/cephadm
2026-03-10T08:32:38.221 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T08:32:38.237 DEBUG:teuthology.orchestra.run.vm06:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T08:32:38.257 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T08:32:38.258 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T08:32:38.279 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T08:32:38.484 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T08:32:38.489 INFO:teuthology.orchestra.run.vm06.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T08:33:34.497 INFO:teuthology.orchestra.run.vm03.stdout:{
2026-03-10T08:33:34.498 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T08:33:34.498 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T08:33:34.498 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [
2026-03-10T08:33:34.498 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T08:33:34.498 INFO:teuthology.orchestra.run.vm03.stdout: ]
2026-03-10T08:33:34.498 INFO:teuthology.orchestra.run.vm03.stdout:}
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout:{
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout: "repo_digests": [
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout: ]
2026-03-10T08:33:46.543 INFO:teuthology.orchestra.run.vm06.stdout:}
2026-03-10T08:33:46.563 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph
2026-03-10T08:33:46.592 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /etc/ceph
2026-03-10T08:33:46.620 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph
2026-03-10T08:33:46.656 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 777 /etc/ceph
2026-03-10T08:33:46.708 INFO:tasks.cephadm:Writing seed config...
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [global] ms bind msgr2 = False
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [global] ms type = async
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T08:33:46.708 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T08:33:46.709 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T08:33:46.709 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T08:33:46.709 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T08:33:46.709 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T08:33:46.709 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T08:33:46.723 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
mon election default strategy = 3
ms bind msgr2 = False
ms type = async
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops
log = true rgw enable usage log = true 2026-03-10T08:33:46.723 DEBUG:teuthology.orchestra.run.vm03:mon.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a.service 2026-03-10T08:33:46.765 DEBUG:teuthology.orchestra.run.vm03:mgr.y> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y.service 2026-03-10T08:33:46.819 INFO:tasks.cephadm:Bootstrapping... 2026-03-10T08:33:46.820 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-addrv '[v1:192.168.123.103:6789]' --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T08:33:46.969 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-10T08:33:46.969 INFO:teuthology.orchestra.run.vm03.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'aaf0329a-1c5b-11f1-8b6f-7f2d819bb543', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-addrv', '[v1:192.168.123.103:6789]', '--skip-admin-label'] 2026-03-10T08:33:46.969 INFO:teuthology.orchestra.run.vm03.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 
2026-03-10T08:33:46.969 INFO:teuthology.orchestra.run.vm03.stdout:Verifying podman|docker is present... 2026-03-10T08:33:46.989 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 5.8.0 2026-03-10T08:33:46.989 INFO:teuthology.orchestra.run.vm03.stdout:Verifying lvm2 is present... 2026-03-10T08:33:46.989 INFO:teuthology.orchestra.run.vm03.stdout:Verifying time synchronization is in place... 2026-03-10T08:33:46.997 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T08:33:46.997 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T08:33:47.003 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T08:33:47.003 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-10T08:33:47.010 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-10T08:33:47.016 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-10T08:33:47.016 INFO:teuthology.orchestra.run.vm03.stdout:Unit chronyd.service is enabled and running 2026-03-10T08:33:47.016 INFO:teuthology.orchestra.run.vm03.stdout:Repeating the final host check... 
2026-03-10T08:33:47.035 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 5.8.0 2026-03-10T08:33:47.035 INFO:teuthology.orchestra.run.vm03.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-10T08:33:47.035 INFO:teuthology.orchestra.run.vm03.stdout:systemctl is present 2026-03-10T08:33:47.035 INFO:teuthology.orchestra.run.vm03.stdout:lvcreate is present 2026-03-10T08:33:47.041 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T08:33:47.041 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T08:33:47.048 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T08:33:47.048 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-10T08:33:47.054 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-10T08:33:47.061 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-10T08:33:47.061 INFO:teuthology.orchestra.run.vm03.stdout:Unit chronyd.service is enabled and running 2026-03-10T08:33:47.061 INFO:teuthology.orchestra.run.vm03.stdout:Host looks OK 2026-03-10T08:33:47.061 INFO:teuthology.orchestra.run.vm03.stdout:Cluster fsid: aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:33:47.062 INFO:teuthology.orchestra.run.vm03.stdout:Acquiring lock 139859371016736 on /run/cephadm/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.lock 2026-03-10T08:33:47.062 INFO:teuthology.orchestra.run.vm03.stdout:Lock 139859371016736 acquired on /run/cephadm/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.lock 2026-03-10T08:33:47.062 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 6789 ... 
2026-03-10T08:33:47.062 INFO:teuthology.orchestra.run.vm03.stdout:Base mon IP(s) is [192.168.123.103:6789], mon addrv is [v1:192.168.123.103:6789] 2026-03-10T08:33:47.067 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.103 metric 100 2026-03-10T08:33:47.067 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.103 metric 100 2026-03-10T08:33:47.070 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T08:33:47.070 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-10T08:33:47.072 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T08:33:47.072 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T08:33:47.072 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T08:33:47.073 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-10T08:33:47.073 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:3/64 scope link noprefixroute 2026-03-10T08:33:47.073 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T08:33:47.073 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-10T08:33:47.073 INFO:teuthology.orchestra.run.vm03.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24'] 2026-03-10T08:33:47.073 INFO:teuthology.orchestra.run.vm03.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T08:33:47.074 
INFO:teuthology.orchestra.run.vm03.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Getting image source signatures 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T08:33:48.442 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-10T08:33:48.613 INFO:teuthology.orchestra.run.vm03.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T08:33:48.613 INFO:teuthology.orchestra.run.vm03.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T08:33:48.613 INFO:teuthology.orchestra.run.vm03.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T08:33:48.700 INFO:teuthology.orchestra.run.vm03.stdout:stat: stdout 167 167 2026-03-10T08:33:48.700 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial keys... 
2026-03-10T08:33:48.788 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQBs169phSjyLRAAuZS45M0KThdNWCGv7F080g== 2026-03-10T08:33:48.910 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQBs169p1SgvNBAAVRlOD1Hn0mSzJ2mkV2uzFg== 2026-03-10T08:33:49.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQBs169pWnDJOhAA0BwZKiuAGdzDF2RT99OJfA== 2026-03-10T08:33:49.004 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial monmap... 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:monmaptool for a [v1:192.168.123.103:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:setting min_mon_release = quincy 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: set fsid to aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:33:49.124 INFO:teuthology.orchestra.run.vm03.stdout:Creating mon... 
2026-03-10T08:33:49.258 INFO:teuthology.orchestra.run.vm03.stdout:create mon.a on 2026-03-10T08:33:49.419 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-10T08:33:49.541 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T08:33:49.685 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.target → /etc/systemd/system/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.target. 2026-03-10T08:33:49.685 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.target → /etc/systemd/system/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.target. 2026-03-10T08:33:49.829 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a 2026-03-10T08:33:49.829 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a.service: Unit ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a.service not loaded. 2026-03-10T08:33:49.968 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.target.wants/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a.service → /etc/systemd/system/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@.service. 2026-03-10T08:33:50.149 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-10T08:33:50.149 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-10T08:33:50.149 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon to start... 2026-03-10T08:33:50.149 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon... 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout id: aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout services: 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.160791s) 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout data: 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:mon is available 2026-03-10T08:33:50.362 INFO:teuthology.orchestra.run.vm03.stdout:Assimilating anything we can from ceph.conf... 
2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v1:192.168.123.103:6789] 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T08:33:50.546 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T08:33:50.547 INFO:teuthology.orchestra.run.vm03.stdout:Generating new minimal ceph.conf... 2026-03-10T08:33:50.725 INFO:teuthology.orchestra.run.vm03.stdout:Restarting the monitor... 
2026-03-10T08:33:50.875 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a[50418]: 2026-03-10T08:33:50.819+0000 7fb34b8c5640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T08:33:51.128 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 podman[50618]: 2026-03-10 08:33:50.873383294 +0000 UTC m=+0.068631558 container died 42efb1d63ce066513aed0ec584ccc9e45beaf123f3570819eabce8efe2fbf770 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0) 2026-03-10T08:33:51.128 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 podman[50618]: 2026-03-10 08:33:50.889934382 +0000 UTC m=+0.085182636 container remove 42efb1d63ce066513aed0ec584ccc9e45beaf123f3570819eabce8efe2fbf770 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T08:33:51.129 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 bash[50618]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a 2026-03-10T08:33:51.129 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a.service: Deactivated successfully. 2026-03-10T08:33:51.129 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 systemd[1]: Stopped Ceph mon.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:33:51.129 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:50 vm03 systemd[1]: Starting Ceph mon.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:33:51.319 INFO:teuthology.orchestra.run.vm03.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 podman[50689]: 2026-03-10 08:33:51.127919777 +0000 UTC m=+0.087361234 container create 8042a210ce6ff0acc9683abf0fee51f83521f4c4c12e079392cda11b71572ef4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS) 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 podman[50689]: 2026-03-10 08:33:51.05176336 +0000 UTC m=+0.011204827 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 podman[50689]: 2026-03-10 08:33:51.281041292 +0000 UTC m=+0.240482739 container init 8042a210ce6ff0acc9683abf0fee51f83521f4c4c12e079392cda11b71572ef4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 podman[50689]: 2026-03-10 08:33:51.283929556 +0000 UTC m=+0.243371013 container start 8042a210ce6ff0acc9683abf0fee51f83521f4c4c12e079392cda11b71572ef4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, org.label-schema.build-date=20260223, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: pidfile_write: ignore empty --pid-file 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: load: jerasure load: lrc 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: RocksDB version: 7.9.2 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Git sha 0 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: DB SUMMARY 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: DB Session ID: 7JSGWKC347HYZ44BFMYK 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: CURRENT file: CURRENT 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: MANIFEST 
file: MANIFEST-000010 size: 179 Bytes 2026-03-10T08:33:51.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75973 ; 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.error_if_exists: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.create_if_missing: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.paranoid_checks: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.env: 0x560a20dc7dc0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.info_log: 0x560a22349820 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.statistics: (nil) 
2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.use_fsync: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_log_file_size: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.allow_fallocate: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.use_direct_reads: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.db_log_dir: 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.wal_dir: 2026-03-10T08:33:51.386 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.write_buffer_manager: 0x560a2234d900 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T08:33:51.386 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: 
Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.unordered_write: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.row_cache: None 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.wal_filter: None 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.two_write_queues: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 
vm03 ceph-mon[50703]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.wal_compression: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.atomic_flush: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.log_readahead_size: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T08:33:51.387 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_background_jobs: 2 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_background_compactions: -1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_subcompactions: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_open_files: -1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: 
Options.wal_bytes_per_sync: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_background_flushes: -1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Compression algorithms supported: 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kZSTD supported: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kXpressCompression supported: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kBZip2Compression supported: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kLZ4Compression supported: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kZlibCompression supported: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: kSnappyCompression supported: 1 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: DMutex 
implementation: pthread_mutex_t 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T08:33:51.387 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.merge_operator: 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_filter: None 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560a223483c0) 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: cache_index_and_filter_blocks: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 
2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: pin_top_level_index_and_filter: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: index_type: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: data_block_index_type: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: index_shortening: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: checksum: 4 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: no_block_cache: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache: 0x560a2236d350 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_name: BinnedLRUCache 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_options: 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: capacity : 536870912 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: num_shard_bits : 4 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: strict_capacity_limit : 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: high_pri_pool_ratio: 0.000 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_compressed: (nil) 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: persistent_cache: (nil) 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_size: 4096 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_size_deviation: 10 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_restart_interval: 16 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: index_block_restart_interval: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: metadata_block_size: 4096 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: partition_filters: 0 2026-03-10T08:33:51.388 
INFO:journalctl@ceph.mon.a.vm03.stdout: use_delta_encoding: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: filter_policy: bloomfilter 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: whole_key_filtering: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: verify_compression: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: read_amp_bytes_per_bit: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: format_version: 5 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: enable_index_compression: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: block_align: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: max_auto_readahead_size: 262144 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: prepopulate_block_cache: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: initial_auto_readahead_size: 8192 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression: NoCompression 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T08:33:51.388 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.num_levels: 7 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T08:33:51.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: 
Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 
ceph-mon[50703]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T08:33:51.389 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 
ceph-mon[50703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.inplace_update_support: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.bloom_locality: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.max_successive_merges: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: 
rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.ttl: 2592000 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.enable_blob_files: false 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.min_blob_size: 0 2026-03-10T08:33:51.389 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T08:33:51.390 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 872268ad-7f4b-456c-a619-11135c882be6 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773131631306918, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773131631313717, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72929, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 229, "table_properties": {"data_size": 71208, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9915, "raw_average_key_size": 49, "raw_value_size": 65622, "raw_average_value_size": 328, "num_data_blocks": 8, "num_entries": 200, "num_filter_entries": 200, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773131631, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "872268ad-7f4b-456c-a619-11135c882be6", "db_session_id": "7JSGWKC347HYZ44BFMYK", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 bash[50689]: 8042a210ce6ff0acc9683abf0fee51f83521f4c4c12e079392cda11b71572ef4 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773131631315501, "job": 1, "event": "recovery_finished"} 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 systemd[1]: Started Ceph mon.a for 
aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560a2236ee00 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: DB pointer 0x560a22478000 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: ** DB Stats ** 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: ** Compaction Stats [default] **
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: L0 2/0 73.04 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 10.6 0.01 0.00 1 0.007 0 0 0.0 0.0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Sum 2/0 73.04 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 10.6 0.01 0.00 1 0.007 0 0 0.0 0.0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 10.6 0.01 0.00 1 0.007 0 0 0.0 0.0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: ** Compaction Stats [default] **
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10.6 0.01 0.00 1 0.007 0 0 0.0 0.0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(Keys): cumulative 0, interval 0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative compaction: 0.00 GB write, 2.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval compaction: 0.00 GB write, 2.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Block cache BinnedLRUCache@0x560a2236d350#2 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.5e-05 secs_since: 0
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout: ** File Read Latency Histogram By Level [default] **
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: starting mon.a rank 0 at public addrs v1:192.168.123.103:6789/0 at bind addrs v1:192.168.123.103:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???) e1 preinit fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
2026-03-10T08:33:51.390 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???).mds e1 new map
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???).mds e1 print_map
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout: e1
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout: btime 2026-03-10T08:33:50:170504+0000
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout: enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout: legacy client fscid: -1
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout:
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout: No filesystems configured
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T08:33:51.391 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:51 vm03 ceph-mon[50703]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T08:33:51.516 INFO:teuthology.orchestra.run.vm03.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-10T08:33:51.516 INFO:teuthology.orchestra.run.vm03.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-10T08:33:51.516 INFO:teuthology.orchestra.run.vm03.stdout:Creating mgr...
2026-03-10T08:33:51.517 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-10T08:33:51.517 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-10T08:33:51.682 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y
2026-03-10T08:33:51.682 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y.service: Unit ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y.service not loaded.
2026-03-10T08:33:51.811 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.target.wants/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y.service → /etc/systemd/system/ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@.service.
2026-03-10T08:33:51.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 systemd[1]: Starting Ceph mgr.y for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543...
2026-03-10T08:33:51.992 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present
2026-03-10T08:33:51.992 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T08:33:51.992 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present
2026-03-10T08:33:51.992 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available
2026-03-10T08:33:51.992 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr to start...
2026-03-10T08:33:51.992 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr...
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 podman[50899]: 2026-03-10 08:33:51.941648765 +0000 UTC m=+0.018162445 container create ae10c127f343b6b0b9f3097117867ccd718916b00227eb441b79117e9f035d8f (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 podman[50899]: 2026-03-10 08:33:51.978375654 +0000 UTC m=+0.054889344 container init ae10c127f343b6b0b9f3097117867ccd718916b00227eb441b79117e9f035d8f (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.license=GPLv2)
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 podman[50899]: 2026-03-10 08:33:51.982107456 +0000 UTC m=+0.058621136 container start ae10c127f343b6b0b9f3097117867ccd718916b00227eb441b79117e9f035d8f (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 bash[50899]: ae10c127f343b6b0b9f3097117867ccd718916b00227eb441b79117e9f035d8f
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 podman[50899]: 2026-03-10 08:33:51.934537788 +0000 UTC m=+0.011051479 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:51 vm03 systemd[1]: Started Ceph mgr.y for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:52.087+0000 7fba5ce61140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T08:33:52.214 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:52.136+0000 7fba5ce61140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T08:33:52.245 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-10T08:33:52.245 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-10T08:33:52.245 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "aaf0329a-1c5b-11f1-8b6f-7f2d819bb543",
2026-03-10T08:33:52.245 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T08:33:50:170504+0000",
2026-03-10T08:33:52.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T08:33:50.171228+0000",
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-10T08:33:52.247 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (1/15)...
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: monmap epoch 1
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: last_changed 2026-03-10T08:33:49.085668+0000
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: created 2026-03-10T08:33:49.085668+0000
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: min_mon_release 19 (squid)
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: election_strategy: 1
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: 0: v1:192.168.123.103:6789/0 mon.a
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: fsmap
2026-03-10T08:33:52.591 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: osdmap e1: 0 total, 0 up, 0 in
2026-03-10T08:33:52.592 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: mgrmap e1: no daemons active
2026-03-10T08:33:52.592 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2093845549' entity='client.admin'
2026-03-10T08:33:52.592 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:52 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1251670602' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T08:33:52.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:52.589+0000 7fba5ce61140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:52.942+0000 7fba5ce61140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: from numpy import show_config as show_numpy_config
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.033+0000 7fba5ce61140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.071+0000 7fba5ce61140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T08:33:53.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.146+0000 7fba5ce61140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T08:33:53.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.664+0000 7fba5ce61140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T08:33:53.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.775+0000 7fba5ce61140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T08:33:53.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.815+0000 7fba5ce61140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T08:33:53.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.850+0000 7fba5ce61140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T08:33:53.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.891+0000 7fba5ce61140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T08:33:54.282 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:53.928+0000 7fba5ce61140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T08:33:54.282 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.101+0000 7fba5ce61140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T08:33:54.282 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.150+0000 7fba5ce61140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "aaf0329a-1c5b-11f1-8b6f-7f2d819bb543",
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T08:33:54.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T08:33:50:170504+0000",
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T08:33:50.171228+0000",
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-10T08:33:54.458 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (2/15)...
2026-03-10T08:33:54.671 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:54 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1371355316' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T08:33:54.671 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.390+0000 7fba5ce61140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T08:33:54.671 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.667+0000 7fba5ce61140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T08:33:54.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.702+0000 7fba5ce61140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T08:33:54.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.744+0000 7fba5ce61140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T08:33:54.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.821+0000 7fba5ce61140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T08:33:54.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.857+0000 7fba5ce61140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T08:33:55.195 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:54.935+0000 7fba5ce61140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T08:33:55.195 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:55 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:55.052+0000 7fba5ce61140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T08:33:55.455 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:55 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:55.194+0000 7fba5ce61140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T08:33:55.455 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:55 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:55.229+0000 7fba5ce61140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: Activating manager daemon y
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: mgrmap e2: y(active, starting, since 0.00360955s)
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: Manager daemon y is now available
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y'
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y'
2026-03-10T08:33:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:55 vm03 ceph-mon[50703]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y'
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "aaf0329a-1c5b-11f1-8b6f-7f2d819bb543",
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
"num_remapped_pgs": 0 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T08:33:56.726 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T08:33:50:170504+0000", 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 
2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T08:33:50.171228+0000", 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-10T08:33:56.727 INFO:teuthology.orchestra.run.vm03.stdout:mgr is available 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: 
stdout mon_cluster_log_file_level = debug 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v1:192.168.123.103:6789] 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T08:33:57.128 INFO:teuthology.orchestra.run.vm03.stdout:Enabling cephadm module... 2026-03-10T08:33:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:57 vm03 ceph-mon[50703]: mgrmap e3: y(active, since 1.00917s) 2026-03-10T08:33:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:57 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3730691492' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T08:33:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:57 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2495825816' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T08:33:58.529 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:58 vm03 ceph-mon[50703]: mgrmap e4: y(active, since 2s) 2026-03-10T08:33:58.529 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:58 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3731099654' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T08:33:58.529 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:58 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ignoring --setuser ceph since I am not root 2026-03-10T08:33:58.529 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:58 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ignoring --setgroup ceph since I am not root 2026-03-10T08:33:58.529 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:58 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:58.381+0000 7fef8d22e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T08:33:58.529 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:58 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:58.420+0000 7fef8d22e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-10T08:33:58.557 
INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart... 2026-03-10T08:33:58.557 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 5... 2026-03-10T08:33:58.838 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:58 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:58.835+0000 7fef8d22e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:59 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3731099654' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:59 vm03 ceph-mon[50703]: mgrmap e5: y(active, since 3s) 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:33:59 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3139609863' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.167+0000 7fef8d22e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: from numpy import show_config as show_numpy_config 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.254+0000 7fef8d22e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.293+0000 7fef8d22e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T08:33:59.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.361+0000 7fef8d22e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T08:34:00.107 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.852+0000 7fef8d22e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T08:34:00.107 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.959+0000 7fef8d22e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:34:00.107 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:33:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:33:59.997+0000 7fef8d22e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T08:34:00.107 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 
08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.031+0000 7fef8d22e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T08:34:00.107 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.070+0000 7fef8d22e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T08:34:00.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.105+0000 7fef8d22e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T08:34:00.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.274+0000 7fef8d22e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T08:34:00.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.323+0000 7fef8d22e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T08:34:00.795 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.531+0000 7fef8d22e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T08:34:01.139 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.793+0000 7fef8d22e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T08:34:01.140 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.826+0000 7fef8d22e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T08:34:01.140 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.864+0000 7fef8d22e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T08:34:01.140 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.935+0000 7fef8d22e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T08:34:01.140 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:00.968+0000 7fef8d22e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T08:34:01.140 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:01.038+0000 7fef8d22e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:01.138+0000 7fef8d22e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:01.260+0000 7fef8d22e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:01.293+0000 7fef8d22e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: Active manager daemon y restarted 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: Activating manager daemon y 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: osdmap e2: 0 total, 0 
up, 0 in 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: mgrmap e6: y(active, starting, since 0.0058695s) 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: Manager daemon y is now available 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T08:34:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:01 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T08:34:02.347 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-10T08:34:02.347 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T08:34:02.347 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T08:34:02.347 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-10T08:34:02.347 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 5 is available 2026-03-10T08:34:02.347 INFO:teuthology.orchestra.run.vm03.stdout:Setting orchestrator backend to cephadm... 2026-03-10T08:34:02.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T08:34:02.797 INFO:teuthology.orchestra.run.vm03.stdout:Generating ssh key... 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: Found migration_current of "None". Setting to last migration. 
2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:02] ENGINE Bus STARTING 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:02] ENGINE Serving on http://192.168.123.103:8765 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: mgrmap e7: y(active, since 1.00937s) 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: from='client.14122 v1:192.168.123.103:0/3926870975' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:02.821 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:02 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Generating public/private ed25519 key pair. 
2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Your identification has been saved in /tmp/tmp0dwphiqz/key 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Your public key has been saved in /tmp/tmp0dwphiqz/key.pub 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: The key fingerprint is: 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: SHA256:TENJBn9diPQ8GJppOuDlVGhISGUXVALnX+X35zSsrGQ ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: The key's randomart image is: 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: +--[ED25519 256]--+ 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | ..+=+B**oo.... | 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | ...+o*.+.O.. | 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | ..+ O + * . | 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | . = * + o.. | 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | . 
+ S o+| 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | . . .oo| 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | E o .| 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | o . | 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: | . | 2026-03-10T08:34:03.073 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: +----[SHA256]-----+ 2026-03-10T08:34:03.246 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuXPSpv1jRKlP0/9fO0gGOirooMs2vq663KoB8q2p2A ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:03.246 INFO:teuthology.orchestra.run.vm03.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T08:34:03.246 INFO:teuthology.orchestra.run.vm03.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T08:34:03.247 INFO:teuthology.orchestra.run.vm03.stdout:Adding host vm03... 
2026-03-10T08:34:04.147 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='client.14122 v1:192.168.123.103:0/3926870975' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:02] ENGINE Serving on https://192.168.123.103:7150 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:02] ENGINE Bus STARTED 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:02] ENGINE Client ('192.168.123.103', 37520) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='client.14130 v1:192.168.123.103:0/4060812142' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='client.14132 v1:192.168.123.103:0/2253738819' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='client.14134 v1:192.168.123.103:0/4153293553' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: Generating ssh key... 
2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:04.148 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:04 vm03 ceph-mon[50703]: from='client.14136 v1:192.168.123.103:0/1942120639' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:04.922 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Added host 'vm03' with addr '192.168.123.103' 2026-03-10T08:34:04.922 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mon service... 2026-03-10T08:34:05.175 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:05 vm03 ceph-mon[50703]: from='client.14138 v1:192.168.123.103:0/517286996' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:05.176 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:05 vm03 ceph-mon[50703]: Deploying cephadm binary to vm03 2026-03-10T08:34:05.176 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:05 vm03 ceph-mon[50703]: mgrmap e8: y(active, since 2s) 2026-03-10T08:34:05.176 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:05 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:05.176 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:05 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:05.204 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T08:34:05.204 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mgr service... 
2026-03-10T08:34:05.467 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T08:34:06.128 INFO:teuthology.orchestra.run.vm03.stdout:Enabling the dashboard module... 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: Added host vm03 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: from='client.14140 v1:192.168.123.103:0/911318133' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: Saving service mon spec with placement count:5 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/4062718977' entity='client.admin' 2026-03-10T08:34:06.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:06 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/822515236' entity='client.admin' 2026-03-10T08:34:07.245 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:07 vm03 ceph-mon[50703]: from='client.14142 v1:192.168.123.103:0/3963200659' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:07.245 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:07 vm03 ceph-mon[50703]: Saving service mgr spec with placement count:2 2026-03-10T08:34:07.245 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:07 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3341707029' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T08:34:07.245 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:07 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:07.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:07 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:07.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:07 vm03 ceph-mon[50703]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:07.635 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:07 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ignoring --setuser ceph since I am not root 2026-03-10T08:34:07.635 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:07 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ignoring --setgroup ceph since I am not root 2026-03-10T08:34:07.635 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:07 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:07.356+0000 7faf9e286140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T08:34:07.635 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:07 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:07.403+0000 7faf9e286140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T08:34:07.669 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0 
2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart... 2026-03-10T08:34:07.671 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 9... 2026-03-10T08:34:07.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:07 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:07.839+0000 7faf9e286140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.145+0000 7faf9e286140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: from numpy import show_config as show_numpy_config 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.227+0000 7faf9e286140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.261+0000 7faf9e286140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.330+0000 7faf9e286140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:08 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3341707029' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:08 vm03 ceph-mon[50703]: mgrmap e9: y(active, since 5s) 2026-03-10T08:34:08.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:08 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3728715497' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T08:34:09.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.791+0000 7faf9e286140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T08:34:09.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.893+0000 7faf9e286140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:34:09.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.929+0000 7faf9e286140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T08:34:09.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:08 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.961+0000 7faf9e286140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T08:34:09.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:08.999+0000 7faf9e286140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T08:34:09.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.033+0000 7faf9e286140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T08:34:09.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.189+0000 7faf9e286140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T08:34:09.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.235+0000 7faf9e286140 -1 mgr[py] Module 
rbd_support has missing NOTIFY_TYPES member 2026-03-10T08:34:09.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.436+0000 7faf9e286140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T08:34:09.960 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.696+0000 7faf9e286140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T08:34:09.960 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.730+0000 7faf9e286140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T08:34:09.960 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.770+0000 7faf9e286140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T08:34:09.960 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.844+0000 7faf9e286140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T08:34:09.960 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.880+0000 7faf9e286140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T08:34:10.233 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:09.958+0000 7faf9e286140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T08:34:10.233 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:10 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:10.067+0000 7faf9e286140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:34:10.233 
INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:10 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:10.197+0000 7faf9e286140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: Active manager daemon y restarted 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: Activating manager daemon y 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: mgrmap e10: y(active, starting, since 0.00591906s) 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: Manager daemon y is now available 2026-03-10T08:34:10.642 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T08:34:10.642 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:10 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:10.232+0000 7faf9e286140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T08:34:11.286 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-10T08:34:11.286 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T08:34:11.286 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T08:34:11.286 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-10T08:34:11.286 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 9 is available 2026-03-10T08:34:11.286 INFO:teuthology.orchestra.run.vm03.stdout:Generating a dashboard self-signed certificate... 2026-03-10T08:34:11.695 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T08:34:11.695 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial admin user... 
2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:10] ENGINE Bus STARTING 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:11] ENGINE Serving on http://192.168.123.103:8765 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:11] ENGINE Serving on https://192.168.123.103:7150 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:11] ENGINE Bus STARTED 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: [10/Mar/2026:08:34:11] ENGINE Client ('192.168.123.103', 37530) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: mgrmap e11: y(active, since 1.00927s) 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: from='client.14154 v1:192.168.123.103:0/2670786933' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: from='client.14154 v1:192.168.123.103:0/2670786933' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' 
entity='mgr.y' 2026-03-10T08:34:11.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:12.122 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$Vnhg5fLmCyabMz44IzQROeOjZ3mprIsOmWvXXYvEUEUQmxbaxt5om", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773131652, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T08:34:12.122 INFO:teuthology.orchestra.run.vm03.stdout:Fetching dashboard port number... 2026-03-10T08:34:12.371 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T08:34:12.371 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-10T08:34:12.371 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout:Ceph Dashboard is now available at: 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout: URL: https://vm03.local:8443/ 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout: User: admin 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout: Password: zo2odm7x78 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.373 INFO:teuthology.orchestra.run.vm03.stdout:Saving cluster configuration to /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config directory 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.654 
INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T08:34:12.654 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout: ceph telemetry on 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout:For more information see: 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:12.655 INFO:teuthology.orchestra.run.vm03.stdout:Bootstrap complete. 2026-03-10T08:34:12.686 INFO:tasks.cephadm:Fetching config... 2026-03-10T08:34:12.686 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:12.686 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T08:34:12.711 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T08:34:12.711 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:12.711 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T08:34:12.776 INFO:tasks.cephadm:Fetching mon keyring... 
2026-03-10T08:34:12.776 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:12.776 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/keyring of=/dev/stdout 2026-03-10T08:34:12.844 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T08:34:12.844 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:12.844 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T08:34:12.901 INFO:tasks.cephadm:Installing pub ssh key for root users... 2026-03-10T08:34:12.901 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuXPSpv1jRKlP0/9fO0gGOirooMs2vq663KoB8q2p2A ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T08:34:12.986 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuXPSpv1jRKlP0/9fO0gGOirooMs2vq663KoB8q2p2A ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:13.015 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuXPSpv1jRKlP0/9fO0gGOirooMs2vq663KoB8q2p2A ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T08:34:13.053 INFO:teuthology.orchestra.run.vm06.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuXPSpv1jRKlP0/9fO0gGOirooMs2vq663KoB8q2p2A ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:13.064 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T08:34:13.234 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config 
/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:13.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:13 vm03 ceph-mon[50703]: from='client.14162 v1:192.168.123.103:0/3841974177' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:13.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:13 vm03 ceph-mon[50703]: from='client.14164 v1:192.168.123.103:0/1397816787' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:13.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:13 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:13.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:13 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/872986961' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T08:34:13.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:13 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2876254235' entity='client.admin' 2026-03-10T08:34:13.536 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T08:34:13.536 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T08:34:13.728 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:14.012 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm06 2026-03-10T08:34:14.012 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:34:14.012 DEBUG:teuthology.orchestra.run.vm06:> dd of=/etc/ceph/ceph.conf 2026-03-10T08:34:14.028 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:34:14.028 DEBUG:teuthology.orchestra.run.vm06:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:14.083 INFO:tasks.cephadm:Adding host vm06 to orchestrator... 2026-03-10T08:34:14.084 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch host add vm06 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: mgrmap e12: y(active, since 2s) 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/754064348' entity='client.admin' 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:14.091 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:14.286 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: from='client.14172 v1:192.168.123.103:0/801173699' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: Updating vm03:/etc/ceph/ceph.conf 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:15 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:16.143 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm06' with addr '192.168.123.106' 2026-03-10T08:34:16.190 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch host ls --format=json 2026-03-10T08:34:16.372 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config 
/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:16.396 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:16 vm03 ceph-mon[50703]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring 2026-03-10T08:34:16.396 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:16 vm03 ceph-mon[50703]: from='client.14174 v1:192.168.123.103:0/3072443639' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:16.396 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:16 vm03 ceph-mon[50703]: Deploying cephadm binary to vm06 2026-03-10T08:34:16.608 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:16.608 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.106", "hostname": "vm06", "labels": [], "status": ""}] 2026-03-10T08:34:16.674 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T08:34:16.674 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd crush tunables default 2026-03-10T08:34:16.842 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:17 vm03 ceph-mon[50703]: Added host vm06 2026-03-10T08:34:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-10T08:34:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:17 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/732763217' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T08:34:17.639 INFO:teuthology.orchestra.run.vm03.stderr:adjusted tunables profile to default 2026-03-10T08:34:17.760 INFO:tasks.cephadm:Adding mon.a on vm03 2026-03-10T08:34:17.760 INFO:tasks.cephadm:Adding mon.c on vm03 2026-03-10T08:34:17.760 INFO:tasks.cephadm:Adding mon.b on vm06 2026-03-10T08:34:17.760 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply mon '3;vm03:[v1:192.168.123.103:6789]=a;vm03:[v1:192.168.123.103:6790]=c;vm06:[v1:192.168.123.106:6789]=b' 2026-03-10T08:34:17.997 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T08:34:18.043 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T08:34:18.330 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled mon update... 2026-03-10T08:34:18.391 DEBUG:teuthology.orchestra.run.vm03:mon.c> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.c.service 2026-03-10T08:34:18.393 DEBUG:teuthology.orchestra.run.vm06:mon.b> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.b.service 2026-03-10T08:34:18.395 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T08:34:18.395 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mon dump -f json 2026-03-10T08:34:18.420 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:18 vm03 ceph-mon[50703]: from='client.14176 v1:192.168.123.103:0/2522628405' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:34:18.420 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:18 vm03 ceph-mon[50703]: mgrmap e13: y(active, since 6s) 2026-03-10T08:34:18.420 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:18 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:18.420 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:18 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/732763217' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T08:34:18.420 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:18 vm03 ceph-mon[50703]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:18.626 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T08:34:18.683 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T08:34:19.023 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:34:19.024 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":1,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","modified":"2026-03-10T08:33:49.085668Z","created":"2026-03-10T08:33:49.085668Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T08:34:19.024 INFO:teuthology.orchestra.run.vm06.stderr:dumped monmap epoch 1 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='client.14180 v1:192.168.123.106:0/3225535577' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:[v1:192.168.123.103:6789]=a;vm03:[v1:192.168.123.103:6790]=c;vm06:[v1:192.168.123.106:6789]=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: Saving service mon spec with placement vm03:[v1:192.168.123.103:6789]=a;vm03:[v1:192.168.123.103:6790]=c;vm06:[v1:192.168.123.106:6789]=b;count:3 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:19.678 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/3024107064' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:19 vm03 ceph-mon[50703]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:20.110 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T08:34:20.110 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mon dump -f json 2026-03-10T08:34:20.473 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:34:20.912 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:34:20.912 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":1,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","modified":"2026-03-10T08:33:49.085668Z","created":"2026-03-10T08:33:49.085668Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T08:34:20.912 INFO:teuthology.orchestra.run.vm06.stderr:dumped monmap epoch 1 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:20 vm03 ceph-mon[50703]: Deploying daemon mon.b on vm06 2026-03-10T08:34:21.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:21 vm06 ceph-mon[54477]: mon.b@-1(synchronizing).mgr e13 mkfs or daemon transitioned to available, loading commands 2026-03-10T08:34:21.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 systemd[1]: Starting Ceph mon.c for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:34:22.018 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T08:34:22.018 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mon dump -f json 2026-03-10T08:34:22.204 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 podman[57146]: 2026-03-10 08:34:21.931676122 +0000 UTC m=+0.019210876 container create 0f628ab033756f69a1be1ef3e04d74af46ee2eb1f05092d0347a1da32a967ca8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-c, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 podman[57146]: 2026-03-10 08:34:21.972986418 +0000 UTC m=+0.060521192 container init 0f628ab033756f69a1be1ef3e04d74af46ee2eb1f05092d0347a1da32a967ca8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-c, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 podman[57146]: 2026-03-10 08:34:21.976766711 +0000 UTC m=+0.064301455 container start 0f628ab033756f69a1be1ef3e04d74af46ee2eb1f05092d0347a1da32a967ca8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-c, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True) 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 bash[57146]: 0f628ab033756f69a1be1ef3e04d74af46ee2eb1f05092d0347a1da32a967ca8 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 podman[57146]: 2026-03-10 08:34:21.922811186 +0000 UTC m=+0.010345949 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:21 vm03 systemd[1]: Started Ceph mon.c for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T08:34:22.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: pidfile_write: ignore empty --pid-file 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: load: jerasure load: lrc 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: RocksDB version: 7.9.2 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Git sha 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: DB SUMMARY 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: DB Session ID: L1RMD1UTITC5XCQ7HP8E 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: CURRENT file: CURRENT 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T08:34:22.429 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 476 ; 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.error_if_exists: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.create_if_missing: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.paranoid_checks: 1 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.env: 0x55b3f51e7dc0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.info_log: 0x55b3f68ac5c0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.statistics: (nil) 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.use_fsync: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_log_file_size: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.allow_fallocate: 1 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.use_direct_reads: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.db_log_dir: 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.wal_dir: 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 
ceph-mon[57160]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T08:34:22.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.write_buffer_manager: 0x55b3f68b1900 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 
2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.unordered_write: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.row_cache: None 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.wal_filter: None 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.two_write_queues: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: 
Options.manual_wal_flush: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.wal_compression: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.atomic_flush: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.log_readahead_size: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_background_jobs: 2 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_background_compactions: -1 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_subcompactions: 1 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_open_files: -1 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T08:34:22.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.wal_bytes_per_sync: 0 
2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_background_flushes: -1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Compression algorithms supported: 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kZSTD supported: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kXpressCompression supported: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kBZip2Compression supported: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kLZ4Compression supported: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kZlibCompression supported: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: kSnappyCompression supported: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: DMutex implementation: pthread_mutex_t 
2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.merge_operator: 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_filter: None 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b3f68ac5a0) 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: cache_index_and_filter_blocks: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T08:34:22.431 
INFO:journalctl@ceph.mon.c.vm03.stdout: pin_top_level_index_and_filter: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: index_type: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: data_block_index_type: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: index_shortening: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: checksum: 4 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: no_block_cache: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache: 0x55b3f68d1350 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache_name: BinnedLRUCache 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache_options: 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: capacity : 536870912 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: num_shard_bits : 4 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: strict_capacity_limit : 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: high_pri_pool_ratio: 0.000 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache_compressed: (nil) 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: persistent_cache: (nil) 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_size: 4096 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_size_deviation: 10 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_restart_interval: 16 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: index_block_restart_interval: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: metadata_block_size: 4096 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: partition_filters: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: 
use_delta_encoding: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: filter_policy: bloomfilter 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: whole_key_filtering: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: verify_compression: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: read_amp_bytes_per_bit: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: format_version: 5 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: enable_index_compression: 1 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: block_align: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: max_auto_readahead_size: 262144 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: prepopulate_block_cache: 0 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: initial_auto_readahead_size: 8192 2026-03-10T08:34:22.431 INFO:journalctl@ceph.mon.c.vm03.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression: NoCompression 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 
ceph-mon[57160]: rocksdb: Options.num_levels: 7 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.target_file_size_base: 67108864 
2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 
ceph-mon[57160]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T08:34:22.432 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.inplace_update_support: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.bloom_locality: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.max_successive_merges: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: 
Options.optimize_filters_for_hits: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.ttl: 2592000 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.enable_blob_files: false 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.min_blob_size: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T08:34:22.433 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d56b38a6-7a77-4cf7-8c56-d7348c41302b 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773131662007386, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773131662008146, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1608, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 488, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 366, "raw_average_value_size": 73, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773131662, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d56b38a6-7a77-4cf7-8c56-d7348c41302b", "db_session_id": "L1RMD1UTITC5XCQ7HP8E", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773131662008224, "job": 1, "event": "recovery_finished"} 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T08:34:22.433 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b3f68d2e00 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: DB pointer 0x55b3f69ec000 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: ** DB Stats ** 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: ** Compaction Stats [default] ** 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: L0 1/0 1.57 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.1 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Sum 1/0 1.57 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.1 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.1 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: ** Compaction Stats [default] ** 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.1 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-10T08:34:22.433 INFO:journalctl@ceph.mon.c.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T08:34:22.434 
INFO:journalctl@ceph.mon.c.vm03.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: Block cache BinnedLRUCache@0x55b3f68d1350#2 capacity: 512.00 MB usage: 0.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: Block cache entry stats(count,size,portion): DataBlock(1,0.58 KB,0.000110269%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c does not exist in monmap, will attempt to join an existing 
cluster 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: using public_addrv v1:192.168.123.103:6790/0 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: starting mon.c rank -1 at public addrs v1:192.168.123.103:6790/0 at bind addrs v1:192.168.123.103:6790/0 mon_data /var/lib/ceph/mon/ceph-c fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(???) e0 preinit fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).mds e1 new map 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).mds e1 print_map 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: e1 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: btime 2026-03-10T08:33:50:170504+0000 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: legacy client fscid: -1 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout: No filesystems configured 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T08:34:22.434 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mkfs aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:34:22 vm03 ceph-mon[57160]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: monmap epoch 1 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: last_changed 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: min_mon_release 19 (squid) 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: election_strategy: 1 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: fsmap 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e1: no daemons active 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2093845549' entity='client.admin' 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1251670602' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1371355316' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Activating manager daemon y 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e2: y(active, starting, since 0.00360955s) 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Manager daemon y is now available 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T08:34:22.434 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 
v1:192.168.123.103:0/3325065657' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14100 v1:192.168.123.103:0/3325065657' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e3: y(active, since 1.00917s) 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3730691492' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2495825816' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e4: y(active, since 2s) 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3731099654' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3731099654' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e5: y(active, since 3s) 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3139609863' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Active manager daemon y restarted 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Activating manager daemon y 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e6: y(active, starting, since 0.0058695s) 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 
v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Manager daemon y is now available 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Found migration_current of "None". Setting to last migration. 
2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:02] ENGINE Bus STARTING 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:02] ENGINE Serving on http://192.168.123.103:8765 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e7: y(active, since 1.00937s) 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14122 v1:192.168.123.103:0/3926870975' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14122 v1:192.168.123.103:0/3926870975' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:02] ENGINE Serving on 
https://192.168.123.103:7150 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:02] ENGINE Bus STARTED 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:02] ENGINE Client ('192.168.123.103', 37520) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14130 v1:192.168.123.103:0/4060812142' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14132 v1:192.168.123.103:0/2253738819' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14134 v1:192.168.123.103:0/4153293553' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Generating ssh key... 
2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14136 v1:192.168.123.103:0/1942120639' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14138 v1:192.168.123.103:0/517286996' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Deploying cephadm binary to vm03 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e8: y(active, since 2s) 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Added host vm03 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14140 v1:192.168.123.103:0/911318133' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 
10 08:34:22 vm03 ceph-mon[57160]: Saving service mon spec with placement count:5 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/4062718977' entity='client.admin' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/822515236' entity='client.admin' 2026-03-10T08:34:22.435 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14142 v1:192.168.123.103:0/3963200659' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Saving service mgr spec with placement count:2 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3341707029' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14118 v1:192.168.123.103:0/1968097009' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3341707029' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e9: y(active, since 5s) 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3728715497' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Active manager daemon y restarted 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Activating manager daemon y 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e10: y(active, starting, since 0.00591906s) 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Manager daemon y is now available 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:10] ENGINE Bus STARTING 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:11] ENGINE Serving on http://192.168.123.103:8765 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:11] ENGINE Serving on https://192.168.123.103:7150 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:11] ENGINE Bus STARTED 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: [10/Mar/2026:08:34:11] ENGINE Client ('192.168.123.103', 37530) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap 
e11: y(active, since 1.00927s) 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14154 v1:192.168.123.103:0/2670786933' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14154 v1:192.168.123.103:0/2670786933' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14162 v1:192.168.123.103:0/3841974177' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14164 v1:192.168.123.103:0/1397816787' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/872986961' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2876254235' entity='client.admin' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e12: y(active, since 2s) 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/754064348' entity='client.admin' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:22.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14172 v1:192.168.123.103:0/801173699' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm03:/etc/ceph/ceph.conf 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14174 v1:192.168.123.103:0/3072443639' 
entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Deploying cephadm binary to vm06 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Added host vm06 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/732763217' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14176 v1:192.168.123.103:0/2522628405' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mgrmap e13: y(active, since 6s) 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/732763217' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.14180 v1:192.168.123.106:0/3225535577' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:[v1:192.168.123.103:6789]=a;vm03:[v1:192.168.123.103:6790]=c;vm06:[v1:192.168.123.106:6789]=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Saving service mon spec with placement vm03:[v1:192.168.123.103:6789]=a;vm03:[v1:192.168.123.103:6790]=c;vm06:[v1:192.168.123.106:6789]=b;count:3 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/3024107064' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: Deploying daemon mon.b on vm06 2026-03-10T08:34:22.437 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:22 vm03 ceph-mon[57160]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T08:34:26.291 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:34:26.291 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":2,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","modified":"2026-03-10T08:34:21.105143Z","created":"2026-03-10T08:33:49.085668Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T08:34:26.291 INFO:teuthology.orchestra.run.vm06.stderr:dumped monmap epoch 2 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: Deploying daemon mon.c on vm03 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: mon.a calling monitor election 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.429 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: mon.b calling monitor election 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: monmap epoch 2 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: last_changed 2026-03-10T08:34:21.105143+0000 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: min_mon_release 19 (squid) 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: election_strategy: 1 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: fsmap 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: mgrmap e13: y(active, since 15s) 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: overall HEALTH_OK 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:26.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:26.514 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: Deploying daemon mon.c on vm03 2026-03-10T08:34:26.514 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: mon.a calling monitor election 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: mon.b calling monitor election 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: monmap epoch 2 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:26.515 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: last_changed 2026-03-10T08:34:21.105143+0000 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: min_mon_release 19 (squid) 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: election_strategy: 1 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: fsmap 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: mgrmap e13: y(active, since 15s) 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: overall HEALTH_OK 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:26.515 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.364 INFO:tasks.cephadm:Waiting for 3 mons 
in monmap... 2026-03-10T08:34:27.365 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mon dump -f json 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/3441164969' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:27.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:27.558 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/3441164969' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:27.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:27.587 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:27.587 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:27.868 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:34:27.868 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":2,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","modified":"2026-03-10T08:34:21.105143Z","created":"2026-03-10T08:33:49.085668Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T08:34:27.868 INFO:teuthology.orchestra.run.vm06.stderr:dumped monmap epoch 2 2026-03-10T08:34:28.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:28.104+0000 7faf6a5f1640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: Deploying daemon mon.c on vm03 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: mon.a calling monitor election 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 
2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: mon.b calling monitor election 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: monmap epoch 2 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: last_changed 2026-03-10T08:34:21.105143+0000 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: min_mon_release 19 (squid) 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: election_strategy: 1 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: fsmap 2026-03-10T08:34:28.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: mgrmap e13: y(active, since 15s) 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 
10 08:34:28 vm03 ceph-mon[57160]: overall HEALTH_OK 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/3441164969' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:28.430 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:28.985 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T08:34:28.985 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mon dump -f json 2026-03-10T08:34:29.193 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:34:33.237 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:34:33.237 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":3,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","modified":"2026-03-10T08:34:28.060509Z","created":"2026-03-10T08:33:49.085668Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6790","nonce":0}]},"addr":"192.168.123.103:6790/0","public_addr":"192.168.123.103:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T08:34:33.237 INFO:teuthology.orchestra.run.vm06.stderr:dumped monmap epoch 3 2026-03-10T08:34:33.317 INFO:tasks.cephadm:Generating final ceph.conf file... 
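The `Waiting for 3 mons in monmap...` messages above come from a poll loop: the cephadm task repeatedly shells out to `ceph mon dump -f json` and counts the monitors in the returned monmap until the expected number (here 3) appears. Note in the epoch-3 dump that mon.c is already *in the monmap* while `"quorum":[0,1]` still lists only ranks 0 and 1 — membership and quorum are separate. A minimal sketch of the membership check, assuming only the JSON structure visible in the dumps above (the function name `mons_in_monmap` is illustrative, not teuthology's actual helper):

```python
import json

def mons_in_monmap(monmap_json: str) -> tuple[int, int]:
    """Parse `ceph mon dump -f json` output; return (mons in map, mons in quorum)."""
    monmap = json.loads(monmap_json)
    return len(monmap["mons"]), len(monmap["quorum"])

# Reduced version of the epoch-3 dump above: three mons in the map,
# but only ranks 0 and 1 (a and b) have formed quorum so far.
dump = json.dumps({
    "epoch": 3,
    "mons": [{"rank": 0, "name": "a"}, {"rank": 1, "name": "b"},
             {"rank": 2, "name": "c"}],
    "quorum": [0, 1],
})

in_map, in_quorum = mons_in_monmap(dump)
assert (in_map, in_quorum) == (3, 2)
```

Once the map count reaches the target, the task stops polling and moves on (here to `Generating final ceph.conf file...`), even if the newest mon has not yet joined quorum.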
2026-03-10T08:34:33.318 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph config generate-minimal-conf 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: Reconfiguring mon.c (monmap changed)... 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: Reconfiguring daemon mon.c on vm03 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: Reconfiguring mon.b (monmap changed)... 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: Reconfiguring daemon mon.b on vm06 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: mon.b calling monitor election 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: mon.a calling monitor election 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: Reconfiguring mon.a (monmap changed)... 
2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: monmap epoch 3 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: 
fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: last_changed 2026-03-10T08:34:28.060509+0000 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: min_mon_release 19 (squid) 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: election_strategy: 1 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: 2: v1:192.168.123.103:6790/0 mon.c 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: fsmap 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: mgrmap e13: y(active, since 22s) 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: overall HEALTH_OK 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:33.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:33.429 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:33.537 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:33.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: Reconfiguring mon.c (monmap changed)... 2026-03-10T08:34:33.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: Reconfiguring daemon mon.c on vm03 2026-03-10T08:34:33.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: Reconfiguring mon.b (monmap changed)... 2026-03-10T08:34:33.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: Reconfiguring daemon mon.b on vm06 2026-03-10T08:34:33.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: mon.b calling monitor election 2026-03-10T08:34:33.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: mon.a calling monitor election 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: Reconfiguring mon.a (monmap changed)... 
2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: monmap epoch 3 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: 
fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: last_changed 2026-03-10T08:34:28.060509+0000 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: min_mon_release 19 (squid) 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: election_strategy: 1 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: 2: v1:192.168.123.103:6790/0 mon.c 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: fsmap 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: mgrmap e13: y(active, since 22s) 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: overall HEALTH_OK 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:33.590 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:33.827 INFO:teuthology.orchestra.run.vm03.stdout:# minimal ceph.conf for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:33.827 INFO:teuthology.orchestra.run.vm03.stdout:[global] 2026-03-10T08:34:33.827 INFO:teuthology.orchestra.run.vm03.stdout: fsid = aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:33.827 INFO:teuthology.orchestra.run.vm03.stdout: mon_host = 192.168.123.103:6789/0 192.168.123.106:6789/0 v1:192.168.123.103:6790/0 2026-03-10T08:34:33.873 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T08:34:33.874 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:33.874 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T08:34:33.900 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:33.901 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:33.967 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:34:33.967 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T08:34:33.997 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:34:33.997 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T08:34:34.063 INFO:tasks.cephadm:Adding mgr.y on vm03 2026-03-10T08:34:34.063 INFO:tasks.cephadm:Adding mgr.x on vm06 2026-03-10T08:34:34.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply mgr '2;vm03=y;vm06=x' 2026-03-10T08:34:34.275 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 
2026-03-10T08:34:34.304 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: Reconfiguring daemon mon.a on vm03 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/3554382456' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:34.305 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/901366949' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.305 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: Reconfiguring daemon mon.a on vm03 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/3554382456' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/901366949' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:34.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:34.529 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled mgr update... 2026-03-10T08:34:34.587 DEBUG:teuthology.orchestra.run.vm06:mgr.x> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.x.service 2026-03-10T08:34:34.588 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T08:34:34.588 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:34:34.588 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T08:34:34.611 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T08:34:34.611 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 
2026-03-10T08:34:34.671 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-10T08:34:34.671 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-10T08:34:34.672 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-10T08:34:34.672 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-10T08:34:34.672 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-10T08:34:34.672 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T08:34:34.672 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T08:34:34.672 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 08:34:13.557614898 +0000 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 08:31:20.951207537 +0000 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 08:31:20.951207537 +0000 2026-03-10T08:34:34.732 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 08:28:30.233000000 +0000 2026-03-10T08:34:34.732 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T08:34:34.804 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T08:34:34.804 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T08:34:34.804 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 
0.000114154 s, 4.5 MB/s 2026-03-10T08:34:34.805 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T08:34:34.865 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 08:34:13.597614914 +0000 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 08:31:20.926207491 +0000 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 08:31:20.926207491 +0000 2026-03-10T08:34:34.924 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 08:28:30.234000000 +0000 2026-03-10T08:34:34.924 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T08:34:34.992 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T08:34:34.992 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T08:34:34.992 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000151733 s, 3.4 MB/s 2026-03-10T08:34:34.993 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T08:34:35.055 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd 2026-03-10T08:34:35.067 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Reconfiguring mon.c (monmap changed)... 
2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Reconfiguring daemon mon.c on vm03 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Reconfiguring mon.b (monmap changed)... 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Reconfiguring daemon mon.b on vm06 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.b calling monitor election 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.a calling monitor election 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Reconfiguring mon.a (monmap changed)... 
2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: monmap epoch 3 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 
fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: last_changed 2026-03-10T08:34:28.060509+0000 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: min_mon_release 19 (squid) 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: election_strategy: 1 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 2: v1:192.168.123.103:6790/0 mon.c 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: fsmap 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mgrmap e13: y(active, since 22s) 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: overall HEALTH_OK 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:35.068 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Reconfiguring daemon mon.a on vm03 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/3554382456' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.068 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/901366949' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.069 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,30 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 
08:34:13.627614925 +0000 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 08:31:20.948207531 +0000 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 08:31:20.948207531 +0000 2026-03-10T08:34:35.093 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 08:28:30.237000000 +0000 2026-03-10T08:34:35.093 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T08:34:35.162 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T08:34:35.162 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T08:34:35.162 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000189645 s, 2.7 MB/s 2026-03-10T08:34:35.163 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T08:34:35.224 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 224 Links: 1 Device type: fc,40 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 08:34:13.657614937 +0000 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 08:31:20.959207552 +0000 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 08:31:20.959207552 +0000 2026-03-10T08:34:35.288 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 08:28:30.239000000 +0000 2026-03-10T08:34:35.289 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde 
of=/dev/null count=1 2026-03-10T08:34:35.359 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T08:34:35.359 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T08:34:35.359 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000139501 s, 3.7 MB/s 2026-03-10T08:34:35.361 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T08:34:35.421 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:34:35.421 DEBUG:teuthology.orchestra.run.vm06:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T08:34:35.441 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T08:34:35.441 DEBUG:teuthology.orchestra.run.vm06:> ls /dev/[sv]d? 2026-03-10T08:34:35.498 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vda 2026-03-10T08:34:35.498 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdb 2026-03-10T08:34:35.498 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdc 2026-03-10T08:34:35.498 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdd 2026-03-10T08:34:35.498 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vde 2026-03-10T08:34:35.498 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T08:34:35.498 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T08:34:35.498 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdb 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdb 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:35.560 
INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 08:34:18.826276843 +0000 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 08:31:20.776898477 +0000 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 08:31:20.776898477 +0000 2026-03-10T08:34:35.560 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 08:29:01.229000000 +0000 2026-03-10T08:34:35.560 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T08:34:35.638 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T08:34:35.638 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T08:34:35.638 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000119524 s, 4.3 MB/s 2026-03-10T08:34:35.639 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T08:34:35.702 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdc 2026-03-10T08:34:35.828 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdc 2026-03-10T08:34:35.828 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 08:34:18.871276878 +0000 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 08:31:20.778898478 +0000 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 08:31:20.778898478 +0000 2026-03-10T08:34:35.829 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 08:29:01.234000000 +0000 2026-03-10T08:34:35.829 
DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: mon.c calling monitor election 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: from='client.24118 v1:192.168.123.106:0/4056650733' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=y;vm06=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: Saving service mgr spec with placement vm03=y;vm06=x;count:2 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: Updating vm03:/etc/ceph/ceph.conf 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: Deploying daemon mgr.x on vm06 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: mon.c calling monitor election 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: mon.b calling monitor election 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: mon.a calling monitor election 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 
2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: monmap epoch 3 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: last_changed 2026-03-10T08:34:28.060509+0000 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: min_mon_release 19 (squid) 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: election_strategy: 1 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: 2: v1:192.168.123.103:6790/0 mon.c 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: fsmap 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: mgrmap e13: y(active, since 24s) 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: overall HEALTH_OK 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:35.920 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:35 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:35.924 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T08:34:35.924 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T08:34:35.924 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000146153 s, 3.5 MB/s 2026-03-10T08:34:35.925 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T08:34:35.977 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdd 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdd 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 08:34:18.905276904 +0000 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 08:31:20.826898482 +0000 2026-03-10T08:34:36.020 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 08:31:20.826898482 +0000 2026-03-10T08:34:36.020 
INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 08:29:01.240000000 +0000 2026-03-10T08:34:36.020 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T08:34:36.145 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T08:34:36.145 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T08:34:36.145 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.00013905 s, 3.7 MB/s 2026-03-10T08:34:36.147 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.c calling monitor election 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='client.24118 v1:192.168.123.106:0/4056650733' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=y;vm06=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Saving service mgr spec with placement vm03=y;vm06=x;count:2 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Updating vm03:/etc/ceph/ceph.conf 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: Deploying daemon mgr.x on vm06 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:34:35 vm03 ceph-mon[57160]: mon.c calling monitor election 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.b calling monitor election 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.a calling monitor election 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: monmap epoch 3 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: last_changed 2026-03-10T08:34:28.060509+0000 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: min_mon_release 19 (squid) 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: election_strategy: 1 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: 2: v1:192.168.123.103:6790/0 mon.c 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: fsmap 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:34:35 vm03 ceph-mon[57160]: mgrmap e13: y(active, since 24s) 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: overall HEALTH_OK 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: mon.c calling monitor election 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: from='client.24118 v1:192.168.123.106:0/4056650733' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=y;vm06=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: Saving service mgr spec with placement vm03=y;vm06=x;count:2 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: Updating vm03:/etc/ceph/ceph.conf 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: Updating vm06:/etc/ceph/ceph.conf 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:34:35 vm03 ceph-mon[50703]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: Deploying daemon mgr.x on vm06 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: mon.c calling monitor election 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: mon.b calling monitor election 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: mon.a calling monitor election 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: monmap epoch 3 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: last_changed 2026-03-10T08:34:28.060509+0000 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: created 2026-03-10T08:33:49.085668+0000 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: min_mon_release 19 (squid) 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: election_strategy: 1 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: 0: v1:192.168.123.103:6789/0 mon.a 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 
ceph-mon[50703]: 1: v1:192.168.123.106:6789/0 mon.b 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: 2: v1:192.168.123.103:6790/0 mon.c 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: fsmap 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: mgrmap e13: y(active, since 24s) 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: overall HEALTH_OK 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:36.193 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vde 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vde 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T08:34:36.237 
INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-10 08:34:18.933276926 +0000 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-10 08:31:20.833898483 +0000 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-10 08:31:20.833898483 +0000 2026-03-10T08:34:36.237 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-10 08:29:01.245000000 +0000 2026-03-10T08:34:36.238 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T08:34:36.288 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-10T08:34:36.288 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-10T08:34:36.288 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000155783 s, 3.3 MB/s 2026-03-10T08:34:36.289 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T08:34:36.310 INFO:tasks.cephadm:Deploying osd.0 on vm03 with /dev/vde... 
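The per-device sanity check teuthology repeats above for /dev/vdb through /dev/vde (stat the node, read one 512-byte block with dd, then confirm the device is not mounted outside devtmpfs) can be sketched as a small shell helper. This is an illustrative reconstruction, not teuthology's actual code; `check_device` and `device_unmounted` are hypothetical names, and the mount filter is factored to read from stdin so it can be exercised on captured `mount` output.

```shell
#!/usr/bin/env sh
# Sketch of the per-device check sequence seen in the log above.

# Succeed if the device does NOT appear in mount output read from stdin,
# ignoring devtmpfs entries -- mirrors the logged predicate:
#   ! mount | grep -v devtmpfs | grep -q /dev/vdX
device_unmounted() {
    dev="$1"
    ! grep -v devtmpfs | grep -q "$dev"
}

# Full check for one device: it exists, is readable, and is unmounted.
check_device() {
    dev="$1"
    stat "$dev" || return 1                         # device node exists
    sudo dd if="$dev" of=/dev/null count=1 || return 1   # one-block read works
    mount | device_unmounted "$dev"                 # not mounted anywhere
}
```

In the log, a device failing the final mount check would be excluded from the scratch-device list, which is why the root device /dev/vda is also dropped earlier ("Removing root device: /dev/vda from device list").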
2026-03-10T08:34:36.310 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vde 2026-03-10T08:34:36.494 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:36.590 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:36.295+0000 7f414ccf8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T08:34:36.848 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:36.849 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:36.656+0000 7f414ccf8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: from numpy import show_config as show_numpy_config 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:36.757+0000 7f414ccf8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:36.795+0000 7f414ccf8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:36 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:36.875+0000 7f414ccf8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:37.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 
ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:37.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:37.178 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:34:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:34:37.039+0000 7faf6a5f1640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T08:34:37.417 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:37.437 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm03:/dev/vde 2026-03-10T08:34:37.619 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:37.669 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.413+0000 7f414ccf8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T08:34:37.669 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.540+0000 7f414ccf8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:34:37.669 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.586+0000 7f414ccf8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T08:34:37.669 
INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.624+0000 7f414ccf8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T08:34:37.944 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.668+0000 7f414ccf8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T08:34:37.944 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.708+0000 7f414ccf8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T08:34:37.944 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.887+0000 7f414ccf8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T08:34:38.249 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:37 vm03 ceph-mon[50703]: Reconfiguring mgr.y (unknown last config time)... 2026-03-10T08:34:38.249 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:37 vm03 ceph-mon[50703]: Reconfiguring daemon mgr.y on vm03 2026-03-10T08:34:38.249 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:37 vm03 ceph-mon[57160]: Reconfiguring mgr.y (unknown last config time)... 
2026-03-10T08:34:38.249 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:37 vm03 ceph-mon[57160]: Reconfiguring daemon mgr.y on vm03 2026-03-10T08:34:38.339 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:37 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:37.943+0000 7f414ccf8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T08:34:38.339 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.185+0000 7f414ccf8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T08:34:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:37 vm06 ceph-mon[54477]: Reconfiguring mgr.y (unknown last config time)... 2026-03-10T08:34:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:37 vm06 ceph-mon[54477]: Reconfiguring daemon mgr.y on vm03 2026-03-10T08:34:38.795 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.495+0000 7f414ccf8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T08:34:38.795 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.535+0000 7f414ccf8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T08:34:38.795 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.580+0000 7f414ccf8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T08:34:38.795 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.664+0000 7f414ccf8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T08:34:38.795 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.705+0000 7f414ccf8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T08:34:39.078 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.794+0000 7f414ccf8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T08:34:39.078 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:38 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:38.917+0000 7f414ccf8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='client.24110 v1:192.168.123.103:0/967242203' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:39.079 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1142769959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]: dispatch 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='client.24136 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]: dispatch 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='client.24136 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]': finished 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T08:34:39.079 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='client.24110 v1:192.168.123.103:0/967242203' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1142769959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='client.24136 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='client.24136 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]': finished 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='client.24110 v1:192.168.123.103:0/967242203' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1142769959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='client.24136 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]: dispatch 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='client.24136 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e35097a3-7591-438f-bdeb-8055d54142a8"}]': finished 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T08:34:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:38 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T08:34:39.339 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:39 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:39.077+0000 7f414ccf8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T08:34:39.339 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:34:39 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:34:39.117+0000 7f414ccf8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T08:34:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:39 vm06 ceph-mon[54477]: Standby manager daemon x started 2026-03-10T08:34:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:39 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T08:34:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:39 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T08:34:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:39 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T08:34:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:39 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T08:34:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:39 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1645138808' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[50703]: Standby manager daemon x started 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[50703]: from='mgr.? 
v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[50703]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[50703]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[50703]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1645138808' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[57160]: Standby manager daemon x started 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[57160]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[57160]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[57160]: from='mgr.? v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[57160]: from='mgr.? 
v1:192.168.123.106:0/372961350' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T08:34:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:39 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1645138808' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:34:41.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:40 vm06 ceph-mon[54477]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-10T08:34:41.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:40 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T08:34:41.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:40 vm06 ceph-mon[54477]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:40 vm03 ceph-mon[57160]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-10T08:34:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:40 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T08:34:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:40 vm03 ceph-mon[57160]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:40 vm03 ceph-mon[50703]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-10T08:34:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:40 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T08:34:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:40 vm03 ceph-mon[50703]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:42.571 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:42 vm03 ceph-mon[50703]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:42.571 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:42 vm03 ceph-mon[57160]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:42.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:42 vm06 ceph-mon[54477]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T08:34:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:43 vm03 ceph-mon[50703]: Deploying daemon osd.0 on vm03 2026-03-10T08:34:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T08:34:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:43 vm03 ceph-mon[57160]: Deploying daemon osd.0 on vm03 2026-03-10T08:34:43.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T08:34:43.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:43.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:43 vm06 ceph-mon[54477]: Deploying daemon osd.0 on vm03 2026-03-10T08:34:44.631 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:44 vm03 ceph-mon[50703]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:44.631 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:44 vm03 ceph-mon[57160]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:44.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:44 vm06 ceph-mon[54477]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:45.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:45.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:45.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:45.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:45 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:45.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:45 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:45.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:45 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:45.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-10T08:34:45.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:45.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.012 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 0 on host 'vm03' 2026-03-10T08:34:46.068 DEBUG:teuthology.orchestra.run.vm03:osd.0> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.0.service 2026-03-10T08:34:46.069 INFO:tasks.cephadm:Deploying osd.1 on vm03 with /dev/vdd... 2026-03-10T08:34:46.069 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vdd 2026-03-10T08:34:46.369 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 
ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:46.673 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[50703]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T08:34:46.673 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:34:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:34:46.497+0000 7f8c507aa740 -1 osd.0 0 log_to_monitors true 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:46 vm03 ceph-mon[57160]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:34:47.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T08:34:47.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:46 vm06 ceph-mon[54477]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T08:34:47.785 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:34:47.804 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm03:/dev/vdd 2026-03-10T08:34:47.989 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: 
osdmap e6: 1 total, 0 up, 1 in
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: Detected new or changed devices on vm03
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:34:48.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:48 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: Detected new or changed devices on vm03
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: Detected new or changed devices on vm03
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:34:48.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:48 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: osdmap e7: 1 total, 0 up, 1 in
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='client.14241 v1:192.168.123.103:0/2789549524' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0'
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.232 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: osdmap e7: 1 total, 0 up, 1 in
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='client.14241 v1:192.168.123.103:0/2789549524' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0'
2026-03-10T08:34:49.233 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:49 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.233 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:34:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:34:48.999+0000 7f8c4cf3e640 -1 osd.0 0 waiting for initial osdmap
2026-03-10T08:34:49.233 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:34:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:34:49.010+0000 7f8c47d54640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: osdmap e7: 1 total, 0 up, 1 in
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='client.14241 v1:192.168.123.103:0/2789549524' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='osd.0 v1:192.168.123.103:6801/3555379361' entity='osd.0'
2026-03-10T08:34:49.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:49 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: purged_snaps scrub starts
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: purged_snaps scrub ok
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1023122563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]: dispatch
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: from='client.24148 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]: dispatch
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: from='client.24148 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]': finished
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: osd.0 v1:192.168.123.103:6801/3555379361 boot
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: osdmap e8: 2 total, 1 up, 2 in
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:34:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:50 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2381402978' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: purged_snaps scrub starts
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: purged_snaps scrub ok
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1023122563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: from='client.24148 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: from='client.24148 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]': finished
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: osd.0 v1:192.168.123.103:6801/3555379361 boot
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: osdmap e8: 2 total, 1 up, 2 in
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2381402978' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: purged_snaps scrub starts
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: purged_snaps scrub ok
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1023122563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: from='client.24148 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: from='client.24148 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "72f3d2ae-068d-49d7-8065-95c621b425f6"}]': finished
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: osd.0 v1:192.168.123.103:6801/3555379361 boot
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: osdmap e8: 2 total, 1 up, 2 in
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:34:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:50 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2381402978' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T08:34:51.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:51 vm06 ceph-mon[54477]: osdmap e9: 2 total, 1 up, 2 in
2026-03-10T08:34:51.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:51 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:34:51.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:51 vm06 ceph-mon[54477]: pgmap v19: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:51 vm03 ceph-mon[57160]: osdmap e9: 2 total, 1 up, 2 in
2026-03-10T08:34:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:51 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:34:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:51 vm03 ceph-mon[57160]: pgmap v19: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:51 vm03 ceph-mon[50703]: osdmap e9: 2 total, 1 up, 2 in
2026-03-10T08:34:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:51 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:34:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:51 vm03 ceph-mon[50703]: pgmap v19: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:52.668 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:52 vm03 ceph-mon[50703]: pgmap v20: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:52.668 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:52 vm03 ceph-mon[57160]: pgmap v20: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:52 vm06 ceph-mon[54477]: pgmap v20: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:53.562 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:53 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T08:34:53.562 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:53 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:53.562 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:53 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T08:34:53.562 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:53 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:53.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:53 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T08:34:53.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:53 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:54 vm03 ceph-mon[57160]: Deploying daemon osd.1 on vm03
2026-03-10T08:34:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:54 vm03 ceph-mon[57160]: pgmap v21: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:54 vm03 ceph-mon[50703]: Deploying daemon osd.1 on vm03
2026-03-10T08:34:54.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:54 vm03 ceph-mon[50703]: pgmap v21: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:54 vm06 ceph-mon[54477]: Deploying daemon osd.1 on vm03
2026-03-10T08:34:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:54 vm06 ceph-mon[54477]: pgmap v21: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:56.706 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:56 vm03 ceph-mon[50703]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:56.707 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:56 vm03 ceph-mon[57160]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:56 vm06 ceph-mon[54477]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:57.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:34:57.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:57.802 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:57.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:34:57.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:57.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:34:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.084 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:58 vm03 ceph-mon[57160]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:59.085 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:58 vm03 ceph-mon[50703]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:59.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:58 vm06 ceph-mon[54477]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:34:59.537 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 1 on host 'vm03'
2026-03-10T08:34:59.655 DEBUG:teuthology.orchestra.run.vm03:osd.1> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.1.service
2026-03-10T08:34:59.656 INFO:tasks.cephadm:Deploying osd.2 on vm03 with /dev/vdc...
2026-03-10T08:34:59.656 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vdc
2026-03-10T08:34:59.961 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.961 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.961 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:59.961 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:34:59.961 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.962 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:34:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:34:59.972 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:34:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: osdmap e10: 2 total, 1 up, 2 in
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:01.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: osdmap e10: 2 total, 1 up, 2 in
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:01.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T08:35:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: osdmap e10: 2 total, 1 up, 2 in
2026-03-10T08:35:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:35:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: pgmap v25: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T08:35:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:35:01.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:01.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:01.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:01.433 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:35:01.451 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm03:/dev/vdc
2026-03-10T08:35:01.658 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config
2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: purged_snaps scrub starts
2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: purged_snaps scrub ok
2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: Detected new or changed devices on vm03
2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: osdmap e11: 2 total, 1 up, 2 in
2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]:
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:01.948 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:35:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:35:01.931+0000 7fd9bb5e4640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: Detected new or changed devices on vm03 2026-03-10T08:35:01.948 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T08:35:01.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T08:35:01.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:01.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 
1}]: dispatch 2026-03-10T08:35:01.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:01 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:02.215 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:35:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:35:01.944+0000 7fd9b6c0d640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: Detected new or changed devices on vm03 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:01 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 
2026-03-10T08:35:03.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='client.24155 v1:192.168.123.103:0/1762605397' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: pgmap v27: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1803465111' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ad9e1fac-53ca-411f-a676-d5c1ab5d0de6"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1803465111' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ad9e1fac-53ca-411f-a676-d5c1ab5d0de6"}]': finished 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: osd.1 v1:192.168.123.103:6805/129267279 boot 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='client.24155 v1:192.168.123.103:0/1762605397' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: pgmap v27: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1803465111' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ad9e1fac-53ca-411f-a676-d5c1ab5d0de6"}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1803465111' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ad9e1fac-53ca-411f-a676-d5c1ab5d0de6"}]': finished 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: osd.1 v1:192.168.123.103:6805/129267279 boot 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:03.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:02 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='osd.1 v1:192.168.123.103:6805/129267279' entity='osd.1' 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='client.24155 v1:192.168.123.103:0/1762605397' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: pgmap v27: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1803465111' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ad9e1fac-53ca-411f-a676-d5c1ab5d0de6"}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1803465111' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ad9e1fac-53ca-411f-a676-d5c1ab5d0de6"}]': finished 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: osd.1 v1:192.168.123.103:6805/129267279 boot 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: osdmap e12: 3 total, 2 up, 3 in 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:35:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:02 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:04.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:03 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2409243450' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:04.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:03 vm06 ceph-mon[54477]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T08:35:04.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:03 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:04.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:03 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2409243450' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:04.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:03 vm03 ceph-mon[57160]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T08:35:04.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:03 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:04.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:03 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2409243450' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:04.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:03 vm03 ceph-mon[50703]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T08:35:04.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:03 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:05.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:04 vm06 ceph-mon[54477]: pgmap v30: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:05.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:04 vm03 ceph-mon[50703]: pgmap v30: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:05.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:04 vm03 ceph-mon[57160]: pgmap v30: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:06.347 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:06 vm03 ceph-mon[50703]: pgmap v31: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:06.350 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:06 vm03 ceph-mon[57160]: pgmap v31: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:06 vm06 ceph-mon[54477]: pgmap v31: 0 pgs: ; 0 B data, 453 MiB 
used, 40 GiB / 40 GiB avail 2026-03-10T08:35:07.350 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:07 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T08:35:07.350 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:07 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:07.350 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:07 vm03 ceph-mon[50703]: Deploying daemon osd.2 on vm03 2026-03-10T08:35:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:07 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T08:35:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:07 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:07 vm03 ceph-mon[57160]: Deploying daemon osd.2 on vm03 2026-03-10T08:35:07.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:07 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T08:35:07.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:07 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:07.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:07 vm06 ceph-mon[54477]: Deploying daemon osd.2 on vm03 2026-03-10T08:35:08.667 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:08 vm03 ceph-mon[57160]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:08.667 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
10 08:35:08 vm03 ceph-mon[50703]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:08.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:08 vm06 ceph-mon[54477]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:09 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:09 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:09.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:09 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:09.621 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:09 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:09.621 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:09 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:09.621 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:09 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:09.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:09 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:09.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:09 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:09.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:09 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:10.548 
INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 2 on host 'vm03' 2026-03-10T08:35:10.638 DEBUG:teuthology.orchestra.run.vm03:osd.2> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.2.service 2026-03-10T08:35:10.640 INFO:tasks.cephadm:Deploying osd.3 on vm03 with /dev/vdb... 2026-03-10T08:35:10.640 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vdb 2026-03-10T08:35:10.956 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:11.402 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.402 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:11.678 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:11 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:35:11.399+0000 7f21948eb740 -1 
osd.2 0 log_to_monitors true 2026-03-10T08:35:12.270 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='osd.2 v1:192.168.123.103:6809/1710778110' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='osd.2 v1:192.168.123.103:6809/1710778110' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T08:35:12.271 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:12.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.396 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:35:12.415 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm03:/dev/vdb 2026-03-10T08:35:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='osd.2 v1:192.168.123.103:6809/1710778110' entity='osd.2' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T08:35:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T08:35:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:12.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:12.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:12.610 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: Detected new or changed devices on vm03 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 
2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='osd.2 v1:192.168.123.103:6809/1710778110' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: Detected new or changed devices on vm03 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 
ceph-mon[57160]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: from='osd.2 v1:192.168.123.103:6809/1710778110' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:13 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:13.429 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:13 vm03 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:35:13.230+0000 7f219107f640 -1 osd.2 0 waiting for initial osdmap 2026-03-10T08:35:13.429 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:13 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:35:13.253+0000 7f218c696640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:35:13.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: Detected new or changed devices on vm03 2026-03-10T08:35:13.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T08:35:13.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='osd.2 v1:192.168.123.103:6809/1710778110' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T08:35:13.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T08:35:13.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:13.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T08:35:13.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T08:35:13.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' 
entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:13.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:13.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:13 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='client.14289 v1:192.168.123.103:0/986204831' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3934215254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]: dispatch 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='client.24185 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]: dispatch 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='client.24185 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]': finished 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: osdmap e16: 4 total, 3 up, 4 in 2026-03-10T08:35:14.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:14.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:14.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:14 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2211200163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='client.14289 v1:192.168.123.103:0/986204831' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3934215254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='client.24185 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='client.24185 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]': finished 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: osdmap e16: 4 total, 3 up, 4 in 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2211200163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='client.14289 v1:192.168.123.103:0/986204831' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3934215254' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='client.24185 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='client.24185 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "76603192-68c1-4c39-a4c0-aa87d5f6b1cd"}]': finished 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: osdmap e16: 4 total, 3 up, 4 in 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:14.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:14 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2211200163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:15.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:15 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:35:15.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:15 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:35:15.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:15 vm06 ceph-mon[54477]: pgmap v38: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:15.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:15 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[57160]: purged_snaps scrub starts 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[57160]: purged_snaps scrub ok 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[57160]: pgmap v38: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[50703]: pgmap v38: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T08:35:15.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:15 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 ceph-mon[54477]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 sudo[56933]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 sudo[56933]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 sudo[56933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T08:35:16.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:16 vm06 sudo[56933]: pam_unix(sudo:session): session closed for user root 2026-03-10T08:35:16.669 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[50703]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75605]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75605]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T08:35:16.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75605]: pam_unix(sudo:session): session closed for user root 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T08:35:16.670 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[57160]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75610]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75610]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T08:35:16.670 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75610]: pam_unix(sudo:session): session closed for user root 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75593]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75593]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75593]: pam_unix(sudo:session): session closed for user root 
2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75601]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75601]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75601]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75601]: pam_unix(sudo:session): session closed for user root 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75597]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-10T08:35:16.670 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75597]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T08:35:16.671 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T08:35:16.671 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:35:16 vm03 sudo[75597]: pam_unix(sudo:session): session closed for user root 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": 
true}]': finished 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: osdmap e18: 4 total, 3 up, 4 in 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T08:35:17.288 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: osdmap e19: 4 total, 3 up, 4 in
2026-03-10T08:35:17.289 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: osdmap e18: 4 total, 3 up, 4 in
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: osdmap e19: 4 total, 3 up, 4 in
2026-03-10T08:35:17.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:17.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:17.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:35:17.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T08:35:17.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: osdmap e18: 4 total, 3 up, 4 in
2026-03-10T08:35:17.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: osdmap e19: 4 total, 3 up, 4 in
2026-03-10T08:35:17.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:18.526 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:18 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T08:35:18.526 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:18 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:18.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:18 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T08:35:18.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:18 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:18.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:18 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T08:35:18.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:18 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:19 vm06 ceph-mon[54477]: Deploying daemon osd.3 on vm03
2026-03-10T08:35:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:19 vm06 ceph-mon[54477]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:19 vm06 ceph-mon[54477]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T08:35:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:19 vm06 ceph-mon[54477]: Cluster is now healthy
2026-03-10T08:35:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:19 vm06 ceph-mon[54477]: mgrmap e15: y(active, since 68s), standbys: x
2026-03-10T08:35:19.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[50703]: Deploying daemon osd.3 on vm03
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[50703]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[50703]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[50703]: Cluster is now healthy
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[50703]: mgrmap e15: y(active, since 68s), standbys: x
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[57160]: Deploying daemon osd.3 on vm03
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[57160]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[57160]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[57160]: Cluster is now healthy
2026-03-10T08:35:19.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:19 vm03 ceph-mon[57160]: mgrmap e15: y(active, since 68s), standbys: x
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[50703]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:20.558 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:20.559 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:20 vm03 ceph-mon[57160]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:20 vm06 ceph-mon[54477]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:21.337 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 3 on host 'vm03'
2026-03-10T08:35:21.423 DEBUG:teuthology.orchestra.run.vm03:osd.3> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.3.service
2026-03-10T08:35:21.425 INFO:tasks.cephadm:Deploying osd.4 on vm06 with /dev/vde...
2026-03-10T08:35:21.425 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vde
2026-03-10T08:35:21.602 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='osd.3 v1:192.168.123.103:6813/2974342634' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T08:35:22.031 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:21 vm06 ceph-mon[54477]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='osd.3 v1:192.168.123.103:6813/2974342634' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[50703]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T08:35:22.242 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='osd.3 v1:192.168.123.103:6813/2974342634' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T08:35:22.243 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:21 vm03 ceph-mon[57160]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T08:35:22.435 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:35:22.462 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm06:/dev/vde
2026-03-10T08:35:22.654 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config
2026-03-10T08:35:22.953 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: osdmap e20: 4 total, 3 up, 4 in
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='osd.3 v1:192.168.123.103:6813/2974342634' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T08:35:22.954 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:22 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: osdmap e20: 4 total, 3 up, 4 in
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='osd.3 v1:192.168.123.103:6813/2974342634' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:23.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: osdmap e20: 4 total, 3 up, 4 in
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='osd.3 v1:192.168.123.103:6813/2974342634' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y'
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T08:35:23.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:22 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:35:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: Detected new or changed devices on vm03
2026-03-10T08:35:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='client.24223 v1:192.168.123.106:0/17645828' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T08:35:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: osdmap e21: 4 total, 3 up, 4 in
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/634270239' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]: dispatch
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='client.24229 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]: dispatch
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='client.24229 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]': finished
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: osdmap e22: 5 total, 3 up, 5 in
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:23 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: Detected new or changed devices on vm03
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='client.24223 v1:192.168.123.106:0/17645828' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: osdmap e21: 4 total, 3 up, 4 in
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/634270239' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='client.24229 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='client.24229 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]': finished
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: osdmap e22: 5 total, 3 up, 5 in
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: Detected new or changed devices on vm03
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='client.24223 v1:192.168.123.106:0/17645828' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: osdmap e21: 4 total, 3 up, 4 in
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/634270239' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='client.24229 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='client.24229 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a828e898-4565-4b34-8d45-f78ab73d10e4"}]': finished
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: osdmap e22: 5 total, 3 up, 5 in
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:35:24.343 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:23 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T08:35:24.678 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:35:24 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3[76397]: 2026-03-10T08:35:24.339+0000 7f8dbc4df640 -1 osd.3 0 waiting for initial osdmap
2026-03-10T08:35:24.678
INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:35:24 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3[76397]: 2026-03-10T08:35:24.347+0000 7f8db7b08640 -1 osd.3 22 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:35:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:24 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:35:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:24 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:35:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:24 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/627386052' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:24 vm06 ceph-mon[54477]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:24 vm06 ceph-mon[54477]: from='osd.3 ' entity='osd.3' 2026-03-10T08:35:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:24 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[57160]: purged_snaps scrub starts 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[57160]: purged_snaps scrub ok 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/627386052' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[57160]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[57160]: from='osd.3 ' entity='osd.3' 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/627386052' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[50703]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[50703]: from='osd.3 ' entity='osd.3' 2026-03-10T08:35:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:24 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[57160]: osd.3 v1:192.168.123.103:6813/2974342634 boot 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[57160]: osdmap e23: 5 total, 4 up, 5 in 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:26 vm03 
ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[57160]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[50703]: osd.3 v1:192.168.123.103:6813/2974342634 boot 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[50703]: osdmap e23: 5 total, 4 up, 5 in 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:26 vm03 ceph-mon[50703]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:26 vm06 ceph-mon[54477]: osd.3 v1:192.168.123.103:6813/2974342634 boot 2026-03-10T08:35:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:26 vm06 ceph-mon[54477]: osdmap e23: 5 total, 4 up, 5 in 2026-03-10T08:35:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T08:35:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:35:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:26 vm06 ceph-mon[54477]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T08:35:27.464 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:27 vm06 ceph-mon[54477]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T08:35:27.464 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:27 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:27.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:27 vm03 ceph-mon[57160]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T08:35:27.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:27 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:27.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:27 vm03 ceph-mon[50703]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T08:35:27.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:27 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:28.655 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:28 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T08:35:28.656 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:28 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:28.656 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:28 vm06 ceph-mon[54477]: Deploying daemon osd.4 on vm06 2026-03-10T08:35:28.656 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:28 vm06 ceph-mon[54477]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[57160]: Deploying daemon osd.4 on vm06 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[57160]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[50703]: Deploying daemon osd.4 on vm06 2026-03-10T08:35:28.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:28 vm03 ceph-mon[50703]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:30.132 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:30 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:30.132 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:30 vm06 
ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:30.132 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:30 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:30 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:30 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:30 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:30 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:30 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:30 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.193 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 4 on host 'vm06' 2026-03-10T08:35:31.244 DEBUG:teuthology.orchestra.run.vm06:osd.4> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.4.service 2026-03-10T08:35:31.245 INFO:tasks.cephadm:Deploying osd.5 on vm06 with /dev/vdd... 
2026-03-10T08:35:31.245 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vdd 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[57160]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[50703]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.428 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:31 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.431 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:31 vm06 ceph-mon[54477]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:31.431 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:31 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.431 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:31 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.431 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:31 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:31.431 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:31 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:31.431 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:31 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:31.533 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:35:32.089 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:35:31 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:35:31.720+0000 7fe828be3740 -1 osd.4 0 log_to_monitors true 2026-03-10T08:35:32.414 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:32 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:32.414 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:32 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:32.414 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:32 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:32.414 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:32 vm06 ceph-mon[54477]: from='osd.4 v1:192.168.123.106:6800/4000324195' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T08:35:32.414 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:32 vm06 ceph-mon[54477]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[50703]: from='osd.4 v1:192.168.123.106:6800/4000324195' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T08:35:32.428 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[50703]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[57160]: from='osd.4 v1:192.168.123.106:6800/4000324195' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T08:35:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:32 vm03 ceph-mon[57160]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T08:35:32.975 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:35:33.000 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm06:/dev/vdd 2026-03-10T08:35:33.185 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T08:35:33.214 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='osd.4 v1:192.168.123.106:6800/4000324195' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: Detected new or changed devices on vm06 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: Adjusting osd_memory_target on vm06 to 257.0M 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: Unable to set osd_memory_target on vm06 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T08:35:33.214 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:33.214 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:33 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='osd.4 v1:192.168.123.106:6800/4000324195' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: Detected new or 
changed devices on vm06 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: Adjusting osd_memory_target on vm06 to 257.0M 2026-03-10T08:35:33.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: Unable to set osd_memory_target on vm06 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 
2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='osd.4 v1:192.168.123.106:6800/4000324195' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: Detected new or changed devices on vm06 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: Adjusting osd_memory_target on vm06 to 257.0M 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: Unable to set osd_memory_target on vm06 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T08:35:33.679 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:33 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: osdmap e27: 5 total, 4 up, 5 in 2026-03-10T08:35:34.403 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:34 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.403 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:35:34 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:35:34.209+0000 7fe825377640 -1 osd.4 0 waiting for initial osdmap 2026-03-10T08:35:34.403 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:35:34 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:35:34.222+0000 7fe82018d640 -1 osd.4 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 
2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: osdmap e27: 5 total, 4 up, 5 in 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: osdmap e27: 5 total, 4 up, 5 in 2026-03-10T08:35:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:34 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:35.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:35:35.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:35:35.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='client.14352 v1:192.168.123.106:0/1916915674' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:35.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='osd.4 ' entity='osd.4' 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/46254341' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c642ac75-11a4-4a8e-9c52-98e98f045bad"}]: dispatch 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/46254341' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c642ac75-11a4-4a8e-9c52-98e98f045bad"}]': finished 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: osd.4 v1:192.168.123.106:6800/4000324195 boot 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: osdmap e28: 6 total, 5 up, 6 in 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:35 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/2116979404' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: purged_snaps scrub starts 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: purged_snaps scrub ok 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='client.14352 v1:192.168.123.106:0/1916915674' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='osd.4 ' entity='osd.4' 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/46254341' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c642ac75-11a4-4a8e-9c52-98e98f045bad"}]: dispatch 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/46254341' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c642ac75-11a4-4a8e-9c52-98e98f045bad"}]': finished 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: osd.4 v1:192.168.123.106:6800/4000324195 boot 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: osdmap e28: 6 total, 5 up, 6 in 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/2116979404' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:35.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: from='client.14352 v1:192.168.123.106:0/1916915674' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: from='osd.4 ' entity='osd.4' 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:35:35 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/46254341' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c642ac75-11a4-4a8e-9c52-98e98f045bad"}]: dispatch 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/46254341' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c642ac75-11a4-4a8e-9c52-98e98f045bad"}]': finished 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: osd.4 v1:192.168.123.106:6800/4000324195 boot 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: osdmap e28: 6 total, 5 up, 6 in 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:35.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:35 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/2116979404' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:36.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:36 vm03 ceph-mon[57160]: osdmap e29: 6 total, 5 up, 6 in 2026-03-10T08:35:36.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:36 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:36.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:36 vm03 ceph-mon[57160]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T08:35:36.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:36 vm03 ceph-mon[50703]: osdmap e29: 6 total, 5 up, 6 in 2026-03-10T08:35:36.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:36 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:36.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:36 vm03 ceph-mon[50703]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T08:35:36.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:36 vm06 ceph-mon[54477]: osdmap e29: 6 total, 5 up, 6 in 2026-03-10T08:35:36.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:36 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:36.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:36 vm06 ceph-mon[54477]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T08:35:37.556 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:37 vm06 ceph-mon[54477]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T08:35:37.556 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:37 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:37.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:37 vm03 ceph-mon[57160]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T08:35:37.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:37 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:37.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:37 vm03 ceph-mon[50703]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T08:35:37.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:37 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:38.548 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:38 vm06 ceph-mon[54477]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 111 KiB/s, 0 objects/s recovering 2026-03-10T08:35:38.548 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T08:35:38.548 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:38 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:38.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:38 vm03 ceph-mon[57160]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 111 KiB/s, 0 objects/s recovering 2026-03-10T08:35:38.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:38 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T08:35:38.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:38 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:38.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:38 vm03 ceph-mon[50703]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 111 KiB/s, 0 objects/s recovering 2026-03-10T08:35:38.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T08:35:38.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:38 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:39.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:39 vm03 ceph-mon[57160]: Deploying daemon osd.5 on vm06 2026-03-10T08:35:39.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:39 vm03 ceph-mon[50703]: Deploying daemon osd.5 on vm06 2026-03-10T08:35:39.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:39 vm06 ceph-mon[54477]: Deploying daemon osd.5 on vm06 2026-03-10T08:35:40.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:40 vm06 ceph-mon[54477]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T08:35:40.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:40 vm03 ceph-mon[57160]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T08:35:40.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:40 vm03 ceph-mon[50703]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T08:35:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:41.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:41.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:41 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:41.678 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:41.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:41 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:41.760 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 5 on host 'vm06' 2026-03-10T08:35:41.833 DEBUG:teuthology.orchestra.run.vm06:osd.5> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.5.service 2026-03-10T08:35:41.835 INFO:tasks.cephadm:Deploying osd.6 on vm06 with /dev/vdc... 2026-03-10T08:35:41.835 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vdc 2026-03-10T08:35:42.158 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:35:42.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:42 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:42.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:42 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:42.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:42 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:42.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:42 vm06 ceph-mon[54477]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-10T08:35:42.590 
INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:35:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:35:42.429+0000 7fbfc4895740 -1 osd.5 0 log_to_monitors true 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[57160]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:42.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:42 vm03 ceph-mon[50703]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-10T08:35:43.618 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:35:43.642 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c 
/etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm06:/dev/vdc 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: Detected new or changed devices on vm06 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: Adjusting osd_memory_target on vm06 to 128.5M 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: Unable to set osd_memory_target on vm06 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:43.675 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:43.675 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:43 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: Detected new or changed devices on vm06 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: Adjusting osd_memory_target on vm06 to 128.5M 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: Unable to set osd_memory_target on vm06 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-10T08:35:43.678 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: Detected new or changed devices on vm06 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: Adjusting osd_memory_target on vm06 to 128.5M 2026-03-10T08:35:43.678 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: Unable to set osd_memory_target on vm06 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:43 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:43.841 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='client.24274 v1:192.168.123.106:0/1697810804' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:44.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:44 vm06 ceph-mon[54477]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T08:35:44.590 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:35:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:35:44.423+0000 7fbfc0816640 -1 osd.5 0 waiting for initial osdmap 2026-03-10T08:35:44.590 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:35:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:35:44.429+0000 7fbfbc640640 -1 osd.5 32 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T08:35:44.678 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='client.24274 v1:192.168.123.106:0/1697810804' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[57160]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T08:35:44.678 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:44.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='client.24274 v1:192.168.123.106:0/1697810804' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:44.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:44.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:44.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:44.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:44 vm03 ceph-mon[50703]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd='[{"prefix": 
"osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: osdmap e32: 6 total, 5 up, 6 in 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/2143243092' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='client.24280 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='client.24280 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]': finished 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: osd.5 v1:192.168.123.106:6804/74091533 boot 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: osdmap e33: 7 total, 6 up, 7 in 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 
ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/2511423422' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: osdmap e32: 6 total, 5 up, 6 in 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/2143243092' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='client.24280 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='client.24280 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]': finished 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: osd.5 v1:192.168.123.106:6804/74091533 boot 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: osdmap e33: 7 total, 6 up, 7 in 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:45 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/2511423422' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:45.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='osd.5 v1:192.168.123.106:6804/74091533' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:45.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: osdmap e32: 6 total, 5 up, 6 in 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/2143243092' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]: dispatch 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='client.24280 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]: dispatch 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='client.24280 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe"}]': finished 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: osd.5 v1:192.168.123.106:6804/74091533 boot 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: osdmap e33: 7 total, 6 up, 7 in 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:45.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:45 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/2511423422' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:46.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:46 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:35:46.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:46 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:35:46.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:46 vm06 ceph-mon[54477]: osdmap e34: 7 total, 6 up, 7 in 2026-03-10T08:35:46.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:46 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:46.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:46 vm06 ceph-mon[54477]: pgmap v72: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[57160]: purged_snaps scrub starts 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[57160]: purged_snaps scrub ok 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[57160]: osdmap e34: 7 total, 6 up, 7 in 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[57160]: pgmap v72: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:46 
vm03 ceph-mon[50703]: osdmap e34: 7 total, 6 up, 7 in 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:46.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:46 vm03 ceph-mon[50703]: pgmap v72: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:47 vm06 ceph-mon[54477]: osdmap e35: 7 total, 6 up, 7 in 2026-03-10T08:35:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:47 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:47 vm03 ceph-mon[57160]: osdmap e35: 7 total, 6 up, 7 in 2026-03-10T08:35:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:47 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:47 vm03 ceph-mon[50703]: osdmap e35: 7 total, 6 up, 7 in 2026-03-10T08:35:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:47 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:49.422 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:49 vm06 ceph-mon[54477]: pgmap v74: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:49.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:49 vm03 ceph-mon[57160]: pgmap v74: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:49.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:49 vm03 ceph-mon[50703]: pgmap v74: 1 pgs: 1 
remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:50 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T08:35:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:50 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:50.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:50 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T08:35:50.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:50 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:50.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:50 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T08:35:50.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:50 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:51.525 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:51 vm06 ceph-mon[54477]: Deploying daemon osd.6 on vm06 2026-03-10T08:35:51.526 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:51 vm06 ceph-mon[54477]: pgmap v75: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:51.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:51 vm03 ceph-mon[57160]: Deploying daemon osd.6 on vm06 2026-03-10T08:35:51.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:51 vm03 ceph-mon[57160]: pgmap v75: 1 pgs: 1 remapped; 449 KiB data, 160 
MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:51.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:51 vm03 ceph-mon[50703]: Deploying daemon osd.6 on vm06 2026-03-10T08:35:51.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:51 vm03 ceph-mon[50703]: pgmap v75: 1 pgs: 1 remapped; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:52.759 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:52 vm06 ceph-mon[54477]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:52.759 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:52 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:52.759 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:52 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:52.759 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:52 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[57160]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[50703]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 
GiB / 120 GiB avail 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:52.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:52 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:53.709 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 6 on host 'vm06' 2026-03-10T08:35:53.784 DEBUG:teuthology.orchestra.run.vm06:osd.6> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.6.service 2026-03-10T08:35:53.825 INFO:tasks.cephadm:Deploying osd.7 on vm06 with /dev/vdb... 2026-03-10T08:35:53.826 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- lvm zap /dev/vdb 2026-03-10T08:35:54.130 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:35:54.156 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:35:54 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:35:54.019+0000 7fb041b45740 -1 Falling back to public interface 2026-03-10T08:35:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 
ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:54 vm06 ceph-mon[54477]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[57160]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' 
entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:54 vm03 ceph-mon[50703]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-10T08:35:55.340 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:35:55 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:35:55.229+0000 7fb041b45740 -1 osd.6 0 log_to_monitors true 2026-03-10T08:35:55.832 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:35:55.855 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch daemon add osd vm06:/dev/vdb 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: Detected new or changed devices on vm06 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: Adjusting osd_memory_target on vm06 to 87739k 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: Unable to set osd_memory_target on vm06 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='osd.6 v1:192.168.123.106:6808/3799364875' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T08:35:56.054 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:56 vm06 ceph-mon[54477]: from='osd.6 ' entity='osd.6' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T08:35:56.083 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: Detected new or changed devices on vm06 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: Adjusting osd_memory_target on vm06 to 87739k 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: Unable to set osd_memory_target on vm06 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='osd.6 v1:192.168.123.106:6808/3799364875' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T08:35:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[57160]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: Detected new or changed devices on vm06 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: 
from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: Adjusting osd_memory_target on vm06 to 87739k 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: Unable to set osd_memory_target on vm06 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='osd.6 v1:192.168.123.106:6808/3799364875' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T08:35:56.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:56 vm03 ceph-mon[50703]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: osdmap e36: 7 total, 6 up, 7 in 2026-03-10T08:35:57.284 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='osd.6 v1:192.168.123.106:6808/3799364875' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:57 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:57.284 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:35:57 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:35:57.076+0000 7fb03dac6640 -1 osd.6 0 waiting for initial osdmap 2026-03-10T08:35:57.284 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:35:57 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:35:57.090+0000 7fb0398f0640 -1 osd.6 37 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: osdmap e36: 7 total, 6 up, 7 in 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='osd.6 v1:192.168.123.106:6808/3799364875' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: osdmap e36: 7 total, 6 up, 7 in 2026-03-10T08:35:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:57.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='osd.6 v1:192.168.123.106:6808/3799364875' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:57.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:35:57.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-10T08:35:57.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T08:35:57.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T08:35:57.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:57 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:35:58.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='client.24301 v1:192.168.123.106:0/2649324767' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: osdmap e37: 7 total, 6 up, 7 in 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/227140962' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24d6a21c-5f7e-4d3e-b64c-8a5679e9e064"}]: dispatch 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/227140962' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24d6a21c-5f7e-4d3e-b64c-8a5679e9e064"}]': finished 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: osd.6 v1:192.168.123.106:6808/3799364875 boot 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: osdmap e38: 8 total, 7 up, 8 in 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:35:58.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:58 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/2412971950' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='client.24301 v1:192.168.123.106:0/2649324767' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: osdmap e37: 7 total, 6 up, 7 in 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:58.428 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/227140962' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24d6a21c-5f7e-4d3e-b64c-8a5679e9e064"}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/227140962' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24d6a21c-5f7e-4d3e-b64c-8a5679e9e064"}]': finished 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: osd.6 v1:192.168.123.106:6808/3799364875 boot 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: osdmap e38: 8 total, 7 up, 8 in 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/2412971950' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='client.24301 v1:192.168.123.106:0/2649324767' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: osdmap e37: 7 total, 6 up, 7 in 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/227140962' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24d6a21c-5f7e-4d3e-b64c-8a5679e9e064"}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/227140962' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24d6a21c-5f7e-4d3e-b64c-8a5679e9e064"}]': finished 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: osd.6 v1:192.168.123.106:6808/3799364875 boot 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: osdmap e38: 8 total, 7 up, 8 in 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:35:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:58 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/2412971950' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T08:35:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:59 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:35:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:59 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:35:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:59 vm06 ceph-mon[54477]: osdmap e39: 8 total, 7 up, 8 in 2026-03-10T08:35:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:59 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:35:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:35:59 vm06 ceph-mon[54477]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[57160]: purged_snaps scrub starts 
2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[57160]: purged_snaps scrub ok 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[57160]: osdmap e39: 8 total, 7 up, 8 in 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[57160]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[50703]: osdmap e39: 8 total, 7 up, 8 in 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:35:59.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:35:59 vm03 ceph-mon[50703]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:00.546 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:00 vm06 ceph-mon[54477]: osdmap e40: 8 total, 7 up, 8 in 2026-03-10T08:36:00.546 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:00 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:00 vm03 ceph-mon[57160]: osdmap e40: 8 total, 7 up, 8 in 2026-03-10T08:36:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:36:00 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:00 vm03 ceph-mon[50703]: osdmap e40: 8 total, 7 up, 8 in 2026-03-10T08:36:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:00 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:01.503 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:01 vm06 ceph-mon[54477]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:01.503 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:01 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T08:36:01.503 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:01 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:01 vm03 ceph-mon[57160]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:01 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T08:36:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:01 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:01 vm03 ceph-mon[50703]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:01.678 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:01 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T08:36:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:01 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:02.446 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:02 vm06 ceph-mon[54477]: Deploying daemon osd.7 on vm06 2026-03-10T08:36:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:02 vm03 ceph-mon[57160]: Deploying daemon osd.7 on vm06 2026-03-10T08:36:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:02 vm03 ceph-mon[50703]: Deploying daemon osd.7 on vm06 2026-03-10T08:36:03.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:03 vm06 ceph-mon[54477]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:03.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:03 vm03 ceph-mon[57160]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:03.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:03 vm03 ceph-mon[50703]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:04.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:04 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:04.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:04 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:04.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:04 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:04.678 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:04 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:04.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:04 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:04.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:04 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:04 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:04 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:04 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:04.959 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 7 on host 'vm06' 2026-03-10T08:36:05.036 DEBUG:teuthology.orchestra.run.vm06:osd.7> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.7.service 2026-03-10T08:36:05.038 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 
2026-03-10T08:36:05.038 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd stat -f json 2026-03-10T08:36:05.233 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:05.322 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:05.322 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.322 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.322 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:05.322 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:05.322 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 
2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:05 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.483 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:05.558 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":40,"num_osds":8,"num_up_osds":7,"osd_up_since":1773131757,"num_in_osds":8,"osd_in_since":1773131757,"num_remapped_pgs":0} 2026-03-10T08:36:05.575 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.576 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:05 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:05.840 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:36:05 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:36:05.573+0000 7f7890678740 -1 osd.7 0 log_to_monitors true 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3762786439' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='osd.7 v1:192.168.123.106:6812/1491932823' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 
ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:06.313 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:06 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.559 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd stat -f json 2026-03-10T08:36:06.585 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3762786439' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='osd.7 v1:192.168.123.106:6812/1491932823' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 
ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3762786439' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='osd.7 v1:192.168.123.106:6812/1491932823' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config 
rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:06.586 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:06 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:06.758 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:06.994 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:07.073 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":41,"num_osds":8,"num_up_osds":7,"osd_up_since":1773131757,"num_in_osds":8,"osd_in_since":1773131757,"num_remapped_pgs":0} 2026-03-10T08:36:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: Detected new or changed devices on vm06 2026-03-10T08:36:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: Adjusting osd_memory_target on vm06 to 65804k 2026-03-10T08:36:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: Unable to set 
osd_memory_target on vm06 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-10T08:36:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: from='osd.7 v1:192.168.123.106:6812/1491932823' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: osdmap e41: 8 total, 7 up, 8 in 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/197476228' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:36:07.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:07 vm06 ceph-mon[54477]: osdmap e42: 8 total, 7 up, 8 in 2026-03-10T08:36:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: Detected new or changed devices on vm06 2026-03-10T08:36:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: Adjusting osd_memory_target on vm06 to 65804k 2026-03-10T08:36:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: Unable to set osd_memory_target on vm06 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-10T08:36:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T08:36:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: from='osd.7 v1:192.168.123.106:6812/1491932823' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: osdmap e41: 8 total, 7 up, 8 in 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", 
"id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/197476228' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[57160]: osdmap e42: 8 total, 7 up, 8 in 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: Detected new or changed devices on vm06 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: Adjusting osd_memory_target on vm06 to 65804k 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: Unable to set osd_memory_target on vm06 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: from='osd.7 v1:192.168.123.106:6812/1491932823' entity='osd.7' cmd=[{"prefix": 
"osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: osdmap e41: 8 total, 7 up, 8 in 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/197476228' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-10T08:36:07.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:07 vm03 ceph-mon[50703]: osdmap e42: 8 total, 7 up, 8 in 2026-03-10T08:36:08.073 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd stat -f json 2026-03-10T08:36:08.258 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 
2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[50703]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[50703]: from='osd.7 ' entity='osd.7' 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[57160]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:08.381 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:08 vm03 ceph-mon[57160]: from='osd.7 ' entity='osd.7' 2026-03-10T08:36:08.494 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:08.543 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":43,"num_osds":8,"num_up_osds":8,"osd_up_since":1773131768,"num_in_osds":8,"osd_in_since":1773131757,"num_remapped_pgs":1} 2026-03-10T08:36:08.544 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd dump --format=json 2026-03-10T08:36:08.589 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:36:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:36:08.260+0000 7f788ce0c640 -1 osd.7 0 
waiting for initial osdmap 2026-03-10T08:36:08.589 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:36:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:36:08.271+0000 7f7887c22640 -1 osd.7 42 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:36:08.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:08 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:08.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:08 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:08.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:08 vm06 ceph-mon[54477]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T08:36:08.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:08 vm06 ceph-mon[54477]: from='osd.7 ' entity='osd.7' 2026-03-10T08:36:08.743 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:08.981 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:08.981 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":43,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","created":"2026-03-10T08:33:50.170889+0000","modified":"2026-03-10T08:36:08.312525+0000","last_up_change":"2026-03-10T08:36:08.312525+0000","last_in_change":"2026-03-10T08:35:57.210999+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T08:35:14.280557+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"e35097a3-7591-438f-bdeb-8055d54142a8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":3555379361}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":3555379361}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":3555379361}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":3555379361}]},"public_addr":"192.168.123.103:6801/3555379361","cluster_addr":"192.168.123.103:6802/3555379361","heartbeat_back_addr":"192.168.123.103:6804/3555379361","heartbeat_front_addr":"192.168.123.103:6803/3555379361","state":["exists","up"]},{"osd":1,"uuid":"72f3d2ae-068d-49d7-8065-95c621b425f6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":29,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":129267279}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":129267279}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6808","nonce":129267279}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:
6807","nonce":129267279}]},"public_addr":"192.168.123.103:6805/129267279","cluster_addr":"192.168.123.103:6806/129267279","heartbeat_back_addr":"192.168.123.103:6808/129267279","heartbeat_front_addr":"192.168.123.103:6807/129267279","state":["exists","up"]},{"osd":2,"uuid":"ad9e1fac-53ca-411f-a676-d5c1ab5d0de6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6809","nonce":1710778110}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6810","nonce":1710778110}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6812","nonce":1710778110}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6811","nonce":1710778110}]},"public_addr":"192.168.123.103:6809/1710778110","cluster_addr":"192.168.123.103:6810/1710778110","heartbeat_back_addr":"192.168.123.103:6812/1710778110","heartbeat_front_addr":"192.168.123.103:6811/1710778110","state":["exists","up"]},{"osd":3,"uuid":"76603192-68c1-4c39-a4c0-aa87d5f6b1cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6813","nonce":2974342634}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6814","nonce":2974342634}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6816","nonce":2974342634}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6815","nonce":2974342634}]},"public_addr":"192.168.123.103:6813/2974342634","cluster_addr":"192.168.123.103:6814/2974342634","heartbeat_back_addr":"192.168.123.103:6816/2974342634","heartbeat_front_addr":"192.168.123.103:6815/2974342634","state":["exists","up"]},{"osd":4,"uuid":"a828e898-4565-4b34-8d45-f78ab73d10e4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":
0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800","nonce":4000324195}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":4000324195}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":4000324195}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6802","nonce":4000324195}]},"public_addr":"192.168.123.106:6800/4000324195","cluster_addr":"192.168.123.106:6801/4000324195","heartbeat_back_addr":"192.168.123.106:6803/4000324195","heartbeat_front_addr":"192.168.123.106:6802/4000324195","state":["exists","up"]},{"osd":5,"uuid":"c642ac75-11a4-4a8e-9c52-98e98f045bad","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":34,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":74091533}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":74091533}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":74091533}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":74091533}]},"public_addr":"192.168.123.106:6804/74091533","cluster_addr":"192.168.123.106:6805/74091533","heartbeat_back_addr":"192.168.123.106:6807/74091533","heartbeat_front_addr":"192.168.123.106:6806/74091533","state":["exists","up"]},{"osd":6,"uuid":"2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":39,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6808","nonce":3799364875}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6809","nonce":3799364875}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6811","nonce":3799364875}]},"heartbeat_front_addrs":{"addrvec":[{
"type":"v1","addr":"192.168.123.106:6810","nonce":3799364875}]},"public_addr":"192.168.123.106:6808/3799364875","cluster_addr":"192.168.123.106:6809/3799364875","heartbeat_back_addr":"192.168.123.106:6811/3799364875","heartbeat_front_addr":"192.168.123.106:6810/3799364875","state":["exists","up"]},{"osd":7,"uuid":"24d6a21c-5f7e-4d3e-b64c-8a5679e9e064","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6812","nonce":1491932823}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6813","nonce":1491932823}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6815","nonce":1491932823}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6814","nonce":1491932823}]},"public_addr":"192.168.123.106:6812/1491932823","cluster_addr":"192.168.123.106:6813/1491932823","heartbeat_back_addr":"192.168.123.106:6815/1491932823","heartbeat_front_addr":"192.168.123.106:6814/1491932823","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:34:47.495650+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:34:59.911349+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:12.363922+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:22.492831+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701
547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:32.691474+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:43.443554+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:56.215429+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[{"pgid":"1.0","osds":[0,6,1]}],"primary_temp":[],"blocklist":{"192.168.123.103:0/1658303687":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/2863700931":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/2828116415":"2026-03-11T08:34:01.295363+0000","192.168.123.103:0/1981351970":"2026-03-11T08:34:10.234836+0000","192.168.123.103:6800/3087347888":"2026-03-11T08:34:01.295363+0000","192.168.123.103:0/3889845919":"2026-03-11T08:34:01.295363+0000","192.168.123.103:6800/1917313635":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/1356345355":"2026-03-11T08:34:01.295363+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T08:36:09.059 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T08:35:14.280557+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '19', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T08:36:09.059 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd pool get .mgr pg_num 2026-03-10T08:36:09.247 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:09.488 INFO:teuthology.orchestra.run.vm03.stdout:pg_num: 1 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: purged_snaps scrub starts 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: purged_snaps scrub ok 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: osdmap e43: 8 total, 8 up, 8 in 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1690980587' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:09.533 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1882104384' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: purged_snaps scrub starts 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: purged_snaps scrub ok 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: osdmap e43: 8 total, 8 up, 8 in 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1690980587' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:09.534 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:09 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1882104384' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:09.567 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm03 2026-03-10T08:36:09.567 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply rgw foo.a --placement '1;vm03=foo.a' 2026-03-10T08:36:09.594 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: purged_snaps scrub starts 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: purged_snaps scrub ok 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: osdmap e43: 8 total, 8 up, 8 in 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1690980587' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T08:36:09.595 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:09 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1882104384' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:09.759 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:10.016 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled rgw.foo.a update... 2026-03-10T08:36:10.093 DEBUG:teuthology.orchestra.run.vm03:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@rgw.foo.a.service 2026-03-10T08:36:10.096 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm06 2026-03-10T08:36:10.096 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd pool create datapool 3 3 replicated 2026-03-10T08:36:10.315 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: osdmap e44: 8 total, 8 up, 8 in 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/698035073' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='client.24332 v1:192.168.123.106:0/1815422312' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm03=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: Deploying daemon rgw.foo.a on vm03 2026-03-10T08:36:10.380 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[50703]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:10.381 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: osdmap e44: 8 total, 8 up, 8 in 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: osdmap e44: 8 total, 8 up, 8 in 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/698035073' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='client.24332 v1:192.168.123.106:0/1815422312' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm03=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: Deploying daemon rgw.foo.a on vm03 2026-03-10T08:36:10.433 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:10 vm06 ceph-mon[54477]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/698035073' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='client.24332 v1:192.168.123.106:0/1815422312' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm03=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: Deploying daemon rgw.foo.a on vm03 2026-03-10T08:36:10.632 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:10 vm03 ceph-mon[57160]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:11.178 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 10 08:36:10 vm03 systemd[1]: Started Ceph rgw.foo.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:11.390 INFO:teuthology.orchestra.run.vm06.stderr:pool 'datapool' created 2026-03-10T08:36:11.442 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- rbd pool init datapool 2026-03-10T08:36:11.626 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: osdmap e45: 8 total, 8 up, 8 in 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/2736129968' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='client.24352 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.650 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:11 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: osdmap e45: 8 total, 8 up, 8 in 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/2736129968' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='client.24352 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: osdmap e45: 8 total, 8 up, 8 in 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/2736129968' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='client.24352 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:11.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:11 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='client.24352 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 
vm03 ceph-mon[57160]: osdmap e46: 8 total, 8 up, 8 in 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3032110579' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='client.24361 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: Checking dashboard <-> RGW credentials 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/4107720071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: pgmap v97: 36 pgs: 35 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='client.24361 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.106:0/4107720071' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[57160]: osdmap e47: 8 total, 8 up, 8 in 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='client.24352 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: osdmap e46: 8 total, 8 up, 8 in 2026-03-10T08:36:12.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3032110579' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='client.24361 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: Checking dashboard <-> RGW credentials 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/4107720071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: pgmap v97: 36 pgs: 35 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='client.24361 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/4107720071' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T08:36:12.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:12 vm03 ceph-mon[50703]: osdmap e47: 8 total, 8 up, 8 in 2026-03-10T08:36:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='client.24352 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T08:36:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: osdmap e46: 8 total, 8 up, 8 in 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3032110579' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='client.24361 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: Checking dashboard <-> RGW credentials 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/4107720071' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: pgmap v97: 36 pgs: 35 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='client.24361 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/4107720071' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T08:36:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:12 vm06 ceph-mon[54477]: osdmap e47: 8 total, 8 up, 8 in 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: osdmap e48: 8 total, 8 up, 8 in 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: osdmap e48: 8 total, 8 up, 8 in 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:13 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: osdmap e48: 8 total, 8 up, 8 in 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:13 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T08:36:14.483 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.106 --placement '1;vm06=iscsi.a' 2026-03-10T08:36:14.671 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:14.697 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:14 vm06 ceph-mon[54477]: pgmap v100: 68 pgs: 1 creating+peering, 60 unknown, 7 active+clean; 450 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T08:36:14.698 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:14 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T08:36:14.698 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:14 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T08:36:14.698 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:14 vm06 
ceph-mon[54477]: osdmap e49: 8 total, 8 up, 8 in 2026-03-10T08:36:14.927 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled iscsi.datapool update... 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[57160]: pgmap v100: 68 pgs: 1 creating+peering, 60 unknown, 7 active+clean; 450 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[57160]: osdmap e49: 8 total, 8 up, 8 in 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[50703]: pgmap v100: 68 pgs: 1 creating+peering, 60 unknown, 7 active+clean; 450 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T08:36:14.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:14 vm03 ceph-mon[50703]: osdmap e49: 8 total, 8 up, 8 in 2026-03-10T08:36:15.004 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 
2026-03-10T08:36:15.004 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:36:15.004 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-10T08:36:15.030 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:36:15.030 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-10T08:36:15.059 DEBUG:teuthology.orchestra.run.vm06:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@iscsi.iscsi.a.service 2026-03-10T08:36:15.100 INFO:tasks.cephadm:Adding prometheus.a on vm06 2026-03-10T08:36:15.100 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply prometheus '1;vm06=a' 2026-03-10T08:36:15.319 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:15.582 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled prometheus update... 
2026-03-10T08:36:15.633 DEBUG:teuthology.orchestra.run.vm06:prometheus.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@prometheus.a.service 2026-03-10T08:36:15.635 INFO:tasks.cephadm:Adding node-exporter.a on vm03 2026-03-10T08:36:15.635 INFO:tasks.cephadm:Adding node-exporter.b on vm06 2026-03-10T08:36:15.635 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply node-exporter '2;vm03=a;vm06=b' 2026-03-10T08:36:15.875 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:16.151 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled node-exporter update... 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='client.24403 v1:192.168.123.106:0/1988032024' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.106", "placement": "1;vm06=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: Saving service iscsi.datapool spec with placement vm06=iscsi.a;count:1 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: osdmap e50: 8 total, 8 up, 8 in 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.173 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.174 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:15 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:16.197 DEBUG:teuthology.orchestra.run.vm03:node-exporter.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.a.service 2026-03-10T08:36:16.199 DEBUG:teuthology.orchestra.run.vm06:node-exporter.b> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.b.service 2026-03-10T08:36:16.200 INFO:tasks.cephadm:Adding alertmanager.a on vm03 2026-03-10T08:36:16.200 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply alertmanager '1;vm03=a' 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='client.24403 v1:192.168.123.106:0/1988032024' entity='client.admin' 
cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.106", "placement": "1;vm06=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: Saving service iscsi.datapool spec with placement vm06=iscsi.a;count:1 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: osdmap e50: 8 total, 8 up, 8 in 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='client.24403 v1:192.168.123.106:0/1988032024' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.106", "placement": "1;vm06=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: Saving service iscsi.datapool spec with placement vm06=iscsi.a;count:1 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: osdmap e50: 8 total, 8 up, 8 in 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T08:36:16.220 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:15 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:16.453 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:16.717 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled alertmanager update... 
2026-03-10T08:36:16.793 DEBUG:teuthology.orchestra.run.vm03:alertmanager.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@alertmanager.a.service 2026-03-10T08:36:16.795 INFO:tasks.cephadm:Adding grafana.a on vm06 2026-03-10T08:36:16.795 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph orch apply grafana '1;vm06=a' 2026-03-10T08:36:16.993 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:17.152 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: from='client.24409 v1:192.168.123.106:0/4243946468' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:17.152 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: Saving service prometheus spec with placement vm06=a;count:1 2026-03-10T08:36:17.152 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: from='client.14520 v1:192.168.123.106:0/231338361' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm03=a;vm06=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:17.152 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: Saving service node-exporter spec with placement vm03=a;vm06=b;count:2 2026-03-10T08:36:17.152 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:17.152 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: from='client.24409 v1:192.168.123.106:0/4243946468' entity='client.admin' cmd=[{"prefix": 
"orch apply", "service_type": "prometheus", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: Saving service prometheus spec with placement vm06=a;count:1 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: from='client.14520 v1:192.168.123.106:0/231338361' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm03=a;vm06=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: Saving service node-exporter spec with placement vm03=a;vm06=b;count:2 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: pgmap v103: 100 pgs: 6 creating+peering, 48 unknown, 46 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 2.0 KiB/s wr, 6 op/s 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: osdmap e51: 8 total, 8 up, 8 in 2026-03-10T08:36:17.153 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:17.258 
INFO:teuthology.orchestra.run.vm06.stdout:Scheduled grafana update... 2026-03-10T08:36:17.286 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: from='client.24409 v1:192.168.123.106:0/4243946468' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: Saving service prometheus spec with placement vm06=a;count:1 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: from='client.14520 v1:192.168.123.106:0/231338361' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm03=a;vm06=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: Saving service node-exporter spec with placement vm03=a;vm06=b;count:2 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: pgmap v103: 100 pgs: 6 creating+peering, 48 unknown, 46 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 2.0 KiB/s wr, 6 op/s 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:36:17 vm06 ceph-mon[54477]: osdmap e51: 8 total, 8 up, 8 in 2026-03-10T08:36:17.287 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:17 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:17.324 DEBUG:teuthology.orchestra.run.vm06:grafana.a> sudo journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@grafana.a.service 2026-03-10T08:36:17.326 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T08:36:17.326 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T08:36:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: pgmap v103: 100 pgs: 6 creating+peering, 48 unknown, 46 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.5 KiB/s rd, 2.0 KiB/s wr, 6 op/s 2026-03-10T08:36:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T08:36:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T08:36:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: osdmap e51: 8 total, 8 up, 8 in 2026-03-10T08:36:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:17 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:17.536 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config 
/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:17.814 INFO:teuthology.orchestra.run.vm03.stdout:[client.0] 2026-03-10T08:36:17.814 INFO:teuthology.orchestra.run.vm03.stdout: key = AQAB2K9pW1MQMBAA1OaPQWFw8nUnYh36qvuDhQ== 2026-03-10T08:36:17.871 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:36:17.872 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T08:36:17.872 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T08:36:17.910 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T08:36:18.098 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.b/config 2026-03-10T08:36:18.370 INFO:teuthology.orchestra.run.vm06.stdout:[client.1] 2026-03-10T08:36:18.370 INFO:teuthology.orchestra.run.vm06.stdout: key = AQAC2K9pYcPSFRAAyg1OwagrknGc1H2vO1FEHg== 2026-03-10T08:36:18.452 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-10T08:36:18.452 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T08:36:18.452 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.24421 v1:192.168.123.106:0/3511055590' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: Saving service alertmanager spec with placement 
vm03=a;count:1 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.24427 v1:192.168.123.106:0/886690471' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: Saving service grafana spec with placement vm06=a;count:1 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: osdmap e52: 8 total, 8 up, 8 in 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3666903302' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:18.476 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:18 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3666903302' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T08:36:18.490 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T08:36:18.490 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T08:36:18.490 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mgr dump --format=json 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.24421 v1:192.168.123.106:0/3511055590' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: Saving service alertmanager spec with placement vm03=a;count:1 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.24427 v1:192.168.123.106:0/886690471' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: Saving service grafana spec with placement vm06=a;count:1 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='mgr.14150 
v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: osdmap e52: 8 total, 8 up, 8 in 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3666903302' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3666903302' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.24421 v1:192.168.123.106:0/3511055590' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: Saving service alertmanager spec with placement vm03=a;count:1 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.24427 v1:192.168.123.106:0/886690471' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: Saving service grafana spec with placement vm06=a;count:1 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: osdmap e52: 8 total, 8 up, 8 in 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3666903302' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:18.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:18 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3666903302' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T08:36:18.682 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:18.965 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:19.037 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":15,"flags":0,"active_gid":14150,"active_name":"y","active_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6800","nonce":1817162249}]},"active_addr":"192.168.123.103:6800/1817162249","active_change":"2026-03-10T08:34:10.234916+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24124,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current `PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs is within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this option can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.103:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":1895021634}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.
168.123.103:0","nonce":4121299624}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":1882814120}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":407002187}]}]} 2026-03-10T08:36:19.039 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T08:36:19.039 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T08:36:19.039 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd dump --format=json 2026-03-10T08:36:19.264 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: pgmap v106: 132 pgs: 13 creating+peering, 45 unknown, 74 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.106:0/430390971' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.24439 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.24439 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: osdmap e53: 8 total, 8 up, 8 in 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/237916972' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T08:36:19.270 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: pgmap v106: 132 pgs: 13 creating+peering, 45 unknown, 74 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/430390971' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.24439 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.24439 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: osdmap e53: 8 total, 8 up, 8 in 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.271 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:19 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/237916972' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T08:36:19.551 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:19.552 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":54,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","created":"2026-03-10T08:33:50.170889+0000","modified":"2026-03-10T08:36:19.401815+0000","last_up_change":"2026-03-10T08:36:08.312525+0000","last_in_change":"2026-03-10T08:35:57.210999+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T08:35:14.280557+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_rati
o_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-10T08:36:10.561401+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"49","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":49,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min
_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-10T08:36:10.939710+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"48","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_s
et_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-10T08:36:12.497245+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"50","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"exp
ected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T08:36:14.426042+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T08:36:16.551140+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"e35097a3-7591-438f-bdeb-8055d54142a8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":3555379361}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":3555379361}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":3555379361}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":3555379361}]},"public_addr":"192.168.123.103:6801/3555379361","cluster_addr":"192.168.123.103:6802/3555379361","heartbeat_back_addr":"192.168.123.103:6804/3555379361","heartbeat_front_addr":"192.168.123.103:6803/3555379361","state":["exists","up"]},{"osd":1,"uuid":"72f3d2ae-068d-49d7-8065-95c621b425f6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":129267279}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":129267279}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6808","nonce":129267279}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6807","nonce":129267279}]},"public_addr":"192.168.123.103:6805/129267279","cluster_addr":"192.168.123.103:6806/129267279","heartbeat_back_addr":"192.168.123.103:6808/129267279","heartbeat_front_addr":"192.168.123.103:6807/129267279","state":["exists","up"]},{"osd":2,"uuid":"ad9e1fac-53ca-411f-a676-d5c1ab5d0de6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up
_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6809","nonce":1710778110}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6810","nonce":1710778110}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6812","nonce":1710778110}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6811","nonce":1710778110}]},"public_addr":"192.168.123.103:6809/1710778110","cluster_addr":"192.168.123.103:6810/1710778110","heartbeat_back_addr":"192.168.123.103:6812/1710778110","heartbeat_front_addr":"192.168.123.103:6811/1710778110","state":["exists","up"]},{"osd":3,"uuid":"76603192-68c1-4c39-a4c0-aa87d5f6b1cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6813","nonce":2974342634}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6814","nonce":2974342634}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6816","nonce":2974342634}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6815","nonce":2974342634}]},"public_addr":"192.168.123.103:6813/2974342634","cluster_addr":"192.168.123.103:6814/2974342634","heartbeat_back_addr":"192.168.123.103:6816/2974342634","heartbeat_front_addr":"192.168.123.103:6815/2974342634","state":["exists","up"]},{"osd":4,"uuid":"a828e898-4565-4b34-8d45-f78ab73d10e4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800","nonce":4000324195}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":4000324195}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":4000324195}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"
192.168.123.106:6802","nonce":4000324195}]},"public_addr":"192.168.123.106:6800/4000324195","cluster_addr":"192.168.123.106:6801/4000324195","heartbeat_back_addr":"192.168.123.106:6803/4000324195","heartbeat_front_addr":"192.168.123.106:6802/4000324195","state":["exists","up"]},{"osd":5,"uuid":"c642ac75-11a4-4a8e-9c52-98e98f045bad","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":74091533}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":74091533}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":74091533}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":74091533}]},"public_addr":"192.168.123.106:6804/74091533","cluster_addr":"192.168.123.106:6805/74091533","heartbeat_back_addr":"192.168.123.106:6807/74091533","heartbeat_front_addr":"192.168.123.106:6806/74091533","state":["exists","up"]},{"osd":6,"uuid":"2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6808","nonce":3799364875}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6809","nonce":3799364875}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6811","nonce":3799364875}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6810","nonce":3799364875}]},"public_addr":"192.168.123.106:6808/3799364875","cluster_addr":"192.168.123.106:6809/3799364875","heartbeat_back_addr":"192.168.123.106:6811/3799364875","heartbeat_front_addr":"192.168.123.106:6810/3799364875","state":["exists","up"]},{"osd":7,"uuid":"24d6a21c-5f7e-4d3e-b64c-8a5679e9e064","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_
begin":0,"last_clean_end":0,"up_from":43,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6812","nonce":1491932823}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6813","nonce":1491932823}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6815","nonce":1491932823}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6814","nonce":1491932823}]},"public_addr":"192.168.123.106:6812/1491932823","cluster_addr":"192.168.123.106:6813/1491932823","heartbeat_back_addr":"192.168.123.106:6815/1491932823","heartbeat_front_addr":"192.168.123.106:6814/1491932823","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:34:47.495650+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:34:59.911349+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:12.363922+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:22.492831+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:32.691474+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:43.443554+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_w
eight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:56.215429+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:36:06.618270+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/1658303687":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/2863700931":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/2828116415":"2026-03-11T08:34:01.295363+0000","192.168.123.103:0/1981351970":"2026-03-11T08:34:10.234836+0000","192.168.123.103:6800/3087347888":"2026-03-11T08:34:01.295363+0000","192.168.123.103:0/3889845919":"2026-03-11T08:34:01.295363+0000","192.168.123.103:6800/1917313635":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/1356345355":"2026-03-11T08:34:01.295363+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: pgmap v106: 132 pgs: 13 creating+peering, 45 unknown, 74 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.106:0/430390971' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.24439 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.24439 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: osdmap e53: 8 total, 8 up, 8 in 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/353183714' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2266188634' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T08:36:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/237916972' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T08:36:19.627 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T08:36:19.627 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd dump --format=json 2026-03-10T08:36:19.896 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:19.929 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 10 08:36:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-rgw-foo-a[80324]: 2026-03-10T08:36:19.512+0000 7f126d2c6980 -1 LDAP not started since no server URIs were provided in the configuration. 
2026-03-10T08:36:20.173 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:20.173 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":54,"fsid":"aaf0329a-1c5b-11f1-8b6f-7f2d819bb543","created":"2026-03-10T08:33:50.170889+0000","modified":"2026-03-10T08:36:19.401815+0000","last_up_change":"2026-03-10T08:36:08.312525+0000","last_in_change":"2026-03-10T08:35:57.210999+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T08:35:14.280557+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_
flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-10T08:36:10.561401+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"49","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":49,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_perio
d":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-10T08:36:10.939710+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"48","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_g
rade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-10T08:36:12.497245+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"50","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_ty
pe":"Fair distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T08:36:14.426042+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T08:36:16.551140+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"e35097a3-7591-438f-bdeb-8055d54142a8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":3555379361}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":3555379361}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":3555379361}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":3555379361}]},"public_addr":"192.168.123.103:6801/3555379361","cluster_addr":"192.168.123.103:6802/3555379361","heartbeat_back_addr":"192.168.123.103:6804/3555379361","heartbeat_front_addr":"192.168.123.103:6803/3555379361","state":["exists","up"]},{"osd":1,"uuid":"72f3d2ae-068d-49d7-8065-95c621b425f6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":129267279}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":129267279}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6808","nonce":129267279}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6807","nonce":129267279}]},"public_addr":"192.168.123.103:6805/129267279","cluster_addr":"192.168.123.103:6806/129267279","heartbeat_back_addr":"192.168.123.103:6808/129267279","heartbeat_front_addr":"192.168.123.103:6807/129267279","state":["exists","up"]},{"osd":2,"uuid":"ad9e1fac-53ca-411f-a676-d5c1ab5d0de6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up
_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6809","nonce":1710778110}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6810","nonce":1710778110}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6812","nonce":1710778110}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6811","nonce":1710778110}]},"public_addr":"192.168.123.103:6809/1710778110","cluster_addr":"192.168.123.103:6810/1710778110","heartbeat_back_addr":"192.168.123.103:6812/1710778110","heartbeat_front_addr":"192.168.123.103:6811/1710778110","state":["exists","up"]},{"osd":3,"uuid":"76603192-68c1-4c39-a4c0-aa87d5f6b1cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6813","nonce":2974342634}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6814","nonce":2974342634}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6816","nonce":2974342634}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6815","nonce":2974342634}]},"public_addr":"192.168.123.103:6813/2974342634","cluster_addr":"192.168.123.103:6814/2974342634","heartbeat_back_addr":"192.168.123.103:6816/2974342634","heartbeat_front_addr":"192.168.123.103:6815/2974342634","state":["exists","up"]},{"osd":4,"uuid":"a828e898-4565-4b34-8d45-f78ab73d10e4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800","nonce":4000324195}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":4000324195}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":4000324195}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"
192.168.123.106:6802","nonce":4000324195}]},"public_addr":"192.168.123.106:6800/4000324195","cluster_addr":"192.168.123.106:6801/4000324195","heartbeat_back_addr":"192.168.123.106:6803/4000324195","heartbeat_front_addr":"192.168.123.106:6802/4000324195","state":["exists","up"]},{"osd":5,"uuid":"c642ac75-11a4-4a8e-9c52-98e98f045bad","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":74091533}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":74091533}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":74091533}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":74091533}]},"public_addr":"192.168.123.106:6804/74091533","cluster_addr":"192.168.123.106:6805/74091533","heartbeat_back_addr":"192.168.123.106:6807/74091533","heartbeat_front_addr":"192.168.123.106:6806/74091533","state":["exists","up"]},{"osd":6,"uuid":"2c29ba87-17ff-4e7b-aed3-65d6fd9b7afe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6808","nonce":3799364875}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6809","nonce":3799364875}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6811","nonce":3799364875}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6810","nonce":3799364875}]},"public_addr":"192.168.123.106:6808/3799364875","cluster_addr":"192.168.123.106:6809/3799364875","heartbeat_back_addr":"192.168.123.106:6811/3799364875","heartbeat_front_addr":"192.168.123.106:6810/3799364875","state":["exists","up"]},{"osd":7,"uuid":"24d6a21c-5f7e-4d3e-b64c-8a5679e9e064","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_
begin":0,"last_clean_end":0,"up_from":43,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6812","nonce":1491932823}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6813","nonce":1491932823}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6815","nonce":1491932823}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6814","nonce":1491932823}]},"public_addr":"192.168.123.106:6812/1491932823","cluster_addr":"192.168.123.106:6813/1491932823","heartbeat_back_addr":"192.168.123.106:6815/1491932823","heartbeat_front_addr":"192.168.123.106:6814/1491932823","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:34:47.495650+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:34:59.911349+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:12.363922+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:22.492831+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:32.691474+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:43.443554+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_w
eight":0,"last_purged_snaps_scrub":"2026-03-10T08:35:56.215429+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T08:36:06.618270+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/1658303687":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/2863700931":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/2828116415":"2026-03-11T08:34:01.295363+0000","192.168.123.103:0/1981351970":"2026-03-11T08:34:10.234836+0000","192.168.123.103:6800/3087347888":"2026-03-11T08:34:01.295363+0000","192.168.123.103:0/3889845919":"2026-03-11T08:34:01.295363+0000","192.168.123.103:6800/1917313635":"2026-03-11T08:34:10.234836+0000","192.168.123.103:0/1356345355":"2026-03-11T08:34:01.295363+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T08:36:20.223 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.0 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.1 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.2 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.3 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.4 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.5 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.6 flush_pg_stats 2026-03-10T08:36:20.224 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph tell osd.7 flush_pg_stats 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T08:36:20.586 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: osdmap e54: 8 total, 8 up, 8 in 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2092350618' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:20.586 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: Deploying daemon iscsi.iscsi.a on vm06 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2976775468' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:20.586 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:20 vm06 ceph-mon[54477]: pgmap v109: 132 pgs: 12 creating+peering, 11 unknown, 109 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 3.7 KiB/s wr, 109 op/s 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: osdmap e54: 8 total, 8 up, 8 in 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2092350618' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T08:36:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: Deploying daemon iscsi.iscsi.a on vm06 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2976775468' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[50703]: pgmap v109: 132 pgs: 12 creating+peering, 11 unknown, 109 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 3.7 KiB/s wr, 109 op/s 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='client.24383 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='client.24385 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: osdmap e54: 8 total, 8 up, 8 in 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2092350618' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: Deploying daemon iscsi.iscsi.a on vm06 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2976775468' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:36:20.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:20 vm03 ceph-mon[57160]: pgmap v109: 132 pgs: 12 creating+peering, 11 unknown, 109 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 3.7 KiB/s wr, 109 op/s 2026-03-10T08:36:20.840 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 systemd[1]: Starting Ceph iscsi.iscsi.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:36:20.943 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.129 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 podman[78499]: 2026-03-10 08:36:20.874283741 +0000 UTC m=+0.023519043 container create fdebdac5e54aea5a4e4ddfe10cc350120d84611e9b75a6c97c8ba906615949d6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T08:36:21.129 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 podman[78499]: 2026-03-10 08:36:20.906400425 +0000 UTC m=+0.055635727 container init fdebdac5e54aea5a4e4ddfe10cc350120d84611e9b75a6c97c8ba906615949d6 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0) 2026-03-10T08:36:21.129 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 podman[78499]: 2026-03-10 08:36:20.913360644 +0000 UTC m=+0.062595946 container start fdebdac5e54aea5a4e4ddfe10cc350120d84611e9b75a6c97c8ba906615949d6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-10T08:36:21.129 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 bash[78499]: 
fdebdac5e54aea5a4e4ddfe10cc350120d84611e9b75a6c97c8ba906615949d6 2026-03-10T08:36:21.129 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 podman[78499]: 2026-03-10 08:36:20.867307713 +0000 UTC m=+0.016543026 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:36:21.129 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:20 vm06 systemd[1]: Started Ceph iscsi.iscsi.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:21.254 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.268 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.269 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.278 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.298 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.307 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.309 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:21.423 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T08:36:21.424 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: Cluster is now healthy 2026-03-10T08:36:21.424 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:36:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.424 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.424 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug Started the configuration object watcher 2026-03-10T08:36:21.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug Checking for config object changes every 1s 2026-03-10T08:36:21.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug Processing osd blocklist entries for this node 2026-03-10T08:36:21.604 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T08:36:21.604 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: Cluster is now healthy 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: Checking pool "datapool" exists for 
service iscsi.datapool 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: Deploying daemon prometheus.a on vm06 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[50703]: osdmap e55: 8 total, 8 up, 8 in 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: Cluster is now healthy 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: Deploying daemon prometheus.a on vm06 2026-03-10T08:36:21.605 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:21 vm03 ceph-mon[57160]: osdmap e55: 8 total, 8 up, 8 in 2026-03-10T08:36:21.638 INFO:teuthology.orchestra.run.vm03.stdout:51539607569 2026-03-10T08:36:21.638 
DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.1 2026-03-10T08:36:21.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T08:36:21.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:21.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: Deploying daemon prometheus.a on vm06 2026-03-10T08:36:21.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:21 vm06 ceph-mon[54477]: osdmap e55: 8 total, 8 up, 8 in 2026-03-10T08:36:21.840 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug Reading the configuration object to update local LIO configuration 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug Configuration does not have an entry for this host(vm06.local) - nothing to define to LIO 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: * Environment: production 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: WARNING: This is a development server. Do not use it in a production deployment. 
2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: Use a production WSGI server instead. 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: * Debug mode: off 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug * Running on all addresses. 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: * Running on all addresses. 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: WARNING: This is a development server. Do not use it in a production deployment. 
2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-10T08:36:21.841 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:21 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-10T08:36:22.242 INFO:teuthology.orchestra.run.vm03.stdout:68719476751 2026-03-10T08:36:22.242 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.2 2026-03-10T08:36:22.294 INFO:teuthology.orchestra.run.vm03.stdout:98784247821 2026-03-10T08:36:22.294 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.3 2026-03-10T08:36:22.307 INFO:teuthology.orchestra.run.vm03.stdout:120259084299 2026-03-10T08:36:22.308 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.4 2026-03-10T08:36:22.325 INFO:teuthology.orchestra.run.vm03.stdout:184683593732 2026-03-10T08:36:22.325 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.7 2026-03-10T08:36:22.345 INFO:teuthology.orchestra.run.vm03.stdout:141733920777 2026-03-10T08:36:22.345 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 
2026-03-10T08:36:22.345 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.5 2026-03-10T08:36:22.366 INFO:teuthology.orchestra.run.vm03.stdout:163208757254 2026-03-10T08:36:22.367 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.6 2026-03-10T08:36:22.390 INFO:teuthology.orchestra.run.vm03.stdout:34359738388 2026-03-10T08:36:22.391 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph osd last-stat-seq osd.0 2026-03-10T08:36:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:22 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.106:0/4258037186' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T08:36:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:22 vm03 ceph-mon[50703]: pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 96 KiB/s rd, 7.6 KiB/s wr, 233 op/s 2026-03-10T08:36:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.106:0/4258037186' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T08:36:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:22 vm03 ceph-mon[57160]: pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 96 KiB/s rd, 7.6 KiB/s wr, 233 op/s 2026-03-10T08:36:22.714 INFO:teuthology.orchestra.run.vm03.stdout:51539607569 2026-03-10T08:36:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:22 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.106:0/4258037186' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T08:36:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:22 vm06 ceph-mon[54477]: pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 96 KiB/s rd, 7.6 KiB/s wr, 233 op/s 2026-03-10T08:36:22.856 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607569 got 51539607569 for osd.1 2026-03-10T08:36:22.856 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:23.169 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.309 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.314 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.407 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.434 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.604 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.608 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:36:23 vm03 ceph-mon[57160]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T08:36:23.608 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:23 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3049387092' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T08:36:23.608 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:23 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:23.608 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:23 vm03 ceph-mon[50703]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T08:36:23.608 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:23 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3049387092' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T08:36:23.608 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:23 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:23.704 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:23.818 INFO:teuthology.orchestra.run.vm03.stdout:98784247821 2026-03-10T08:36:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:23 vm06 ceph-mon[54477]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T08:36:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:23 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3049387092' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T08:36:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:23 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:23.979 INFO:teuthology.orchestra.run.vm03.stdout:184683593732 2026-03-10T08:36:24.157 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247821 got 98784247821 for osd.3 2026-03-10T08:36:24.158 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.189 INFO:teuthology.orchestra.run.vm03.stdout:141733920777 2026-03-10T08:36:24.191 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593732 got 184683593732 for osd.7 2026-03-10T08:36:24.191 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.333 INFO:tasks.cephadm.ceph_manager.ceph:need seq 141733920777 got 141733920777 for osd.5 2026-03-10T08:36:24.333 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.362 INFO:teuthology.orchestra.run.vm03.stdout:120259084299 2026-03-10T08:36:24.408 INFO:teuthology.orchestra.run.vm03.stdout:68719476751 2026-03-10T08:36:24.496 INFO:teuthology.orchestra.run.vm03.stdout:34359738388 2026-03-10T08:36:24.511 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084299 got 120259084299 for osd.4 2026-03-10T08:36:24.511 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.515 INFO:teuthology.orchestra.run.vm03.stdout:163208757255 2026-03-10T08:36:24.573 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476751 got 68719476751 for osd.2 2026-03-10T08:36:24.573 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.606 INFO:tasks.cephadm.ceph_manager.ceph:need seq 163208757254 got 163208757255 for osd.6 2026-03-10T08:36:24.606 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.627 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738388 got 34359738388 for osd.0 2026-03-10T08:36:24.627 DEBUG:teuthology.parallel:result is None 2026-03-10T08:36:24.627 
INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T08:36:24.627 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph pg dump --format=json 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2723847900' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1706710587' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3303848520' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[57160]: pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 5.8 KiB/s wr, 187 op/s 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2857671949' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/322376859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2723847900' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1706710587' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3303848520' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[50703]: pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 5.8 KiB/s wr, 187 op/s 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2857671949' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T08:36:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:24 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/322376859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T08:36:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:24 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2723847900' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T08:36:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:24 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1706710587' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T08:36:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:24 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3303848520' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T08:36:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:24 vm06 ceph-mon[54477]: pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 5.8 KiB/s wr, 187 op/s 2026-03-10T08:36:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:24 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2857671949' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T08:36:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:24 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/322376859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T08:36:24.854 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:25.124 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:25.127 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-10T08:36:25.205 
INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":112,"stamp":"2026-03-10T08:36:24.266906+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":774,"num_read_kb":517,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":375,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220456,"kb_used_data":5764,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518936,"statfs":{"total":171765137408,"available":171539390464,"internally_reserved":0,"allocated":5902336,"data_stored":3068973,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":
1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4325,"num_objects":186,"num_object_clones":0,"num_object_copies":558,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":186,"num_whiteouts":0,"num_read":704,"num_read_kb":460,"num_write":421,"num_write_kb":35,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"6.001260"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880307+0000","last_change":"2026-03-10T08:36:12.411018+0000","last_active":"2026-03-10T08:36:21.880307+0000","last_peered":"2026-03-10T08:36:21.880307+0000","last_clean":"2026-03-10T08:36:21.880307+0000","last_became_active":"2026-03-10T08:36:12.404523+0000","last_became_peered":"2026-03-10T08:36:12.404523+0000","last_unstale":"2026-03-10T08:36:21.880307+0000","last_undegraded":"2026-03-10T08:36:21.880307+0000","last_fullsized":"2026-03-10T08:36:21.880307+0000","mapping
_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:22:43.385552+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406454+0000","last_change":"2026-03-10T08:36:14
.410669+0000","last_active":"2026-03-10T08:36:21.406454+0000","last_peered":"2026-03-10T08:36:21.406454+0000","last_clean":"2026-03-10T08:36:21.406454+0000","last_became_active":"2026-03-10T08:36:14.410443+0000","last_became_peered":"2026-03-10T08:36:14.410443+0000","last_unstale":"2026-03-10T08:36:21.406454+0000","last_undegraded":"2026-03-10T08:36:21.406454+0000","last_fullsized":"2026-03-10T08:36:21.406454+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:32:55.455883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.893165+0000","last_change":"2026-03-10T08:36:16.424737+0000","last_active":"2026-03-10T08:36:21.893165+0000","last_peered":"2026-03-10T08:36:21.893165+0000","last_clean":"2026-03-10T08:36:21.893165+0000","last_became_active":"2026-03-10T08:36:16.424616+0000","last_became_peered":"2026-03-10T08:36:16.424616+0000","last_unstale":"2026-03-10T08:36:21.893165+0000","last_undegraded":"2026-03-10T08:36:21.893165+0000","last_fullsized":"2026-03-10T08:36:21.893165+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:58:45.690547+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406428+0000","last_change":"2026-03-10T08:36:18.445517+0000","last_active":"2026-03-10T08:36:21.406428+0000","last_peered":"2026-03-10T08:36:21.406428+0000","last_clean":"2026-03-10T08:36:21.406428+0000","last_became_active":"2026-03-10T08:36:18.445429+0000","last_became_peered":"2026-03-10T08:36:18.445429+0000
","last_unstale":"2026-03-10T08:36:21.406428+0000","last_undegraded":"2026-03-10T08:36:21.406428+0000","last_fullsized":"2026-03-10T08:36:21.406428+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:41:48.603648+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1
b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886485+0000","last_change":"2026-03-10T08:36:18.452759+0000","last_active":"2026-03-10T08:36:21.886485+0000","last_peered":"2026-03-10T08:36:21.886485+0000","last_clean":"2026-03-10T08:36:21.886485+0000","last_became_active":"2026-03-10T08:36:18.452416+0000","last_became_peered":"2026-03-10T08:36:18.452416+0000","last_unstale":"2026-03-10T08:36:21.886485+0000","last_undegraded":"2026-03-10T08:36:21.886485+0000","last_fullsized":"2026-03-10T08:36:21.886485+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:42:23.883890+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886510+0000","last_change":"2026-03-10T08:36:12.412903+0000","last_active":"2026-03-10T08:36:21.886510+0000","last_peered":"2026-03-10T08:36:21.886510+0000","last_clean":"2026-03-10T08:36:21.886510+0000","last_became_active":"2026-03-10T08:36:12.412759+0000","last_became_peered":"2026-03-10T08:36:12.412759+0000","last_unstale":"2026-03-10T08:36:21.886510+0000","last_undegraded":"2026-03-10T08:36:21.886510+0000","last_fullsized":"2026-03-10T08:36:21.886510+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374
234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:38:34.431855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886536+0000","last_change":"2026-03-10T08:36:14.407669+0000","last_active":"2026-03-10T08:36:21.886536+0000","last_peered":"2026-03-10T08:36:21.886536+0000","last_clean":"2026-03-10T08:36:21.886536+0000","last_became_active":"2026-03-10T08:36:14.407572+0000","last_became_peered":"2026-03-10T08:36:14.407572+0000","l
ast_unstale":"2026-03-10T08:36:21.886536+0000","last_undegraded":"2026-03-10T08:36:21.886536+0000","last_fullsized":"2026-03-10T08:36:21.886536+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:42:57.447641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":
"5.18","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404044+0000","last_change":"2026-03-10T08:36:16.435838+0000","last_active":"2026-03-10T08:36:21.404044+0000","last_peered":"2026-03-10T08:36:21.404044+0000","last_clean":"2026-03-10T08:36:21.404044+0000","last_became_active":"2026-03-10T08:36:16.435749+0000","last_became_peered":"2026-03-10T08:36:16.435749+0000","last_unstale":"2026-03-10T08:36:21.404044+0000","last_undegraded":"2026-03-10T08:36:21.404044+0000","last_fullsized":"2026-03-10T08:36:21.404044+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:00:26.892511+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393524+0000","last_change":"2026-03-10T08:36:12.406510+0000","last_active":"2026-03-10T08:36:21.393524+0000","last_peered":"2026-03-10T08:36:21.393524+0000","last_clean":"2026-03-10T08:36:21.393524+0000","last_became_active":"2026-03-10T08:36:12.406315+0000","last_became_peered":"2026-03-10T08:36:12.406315+0000","last_unstale":"2026-03-10T08:36:21.393524+0000","last_undegraded":"2026-03-10T08:36:21.393524+0000","last_fullsized":"2026-03-10T08:36:21.393524+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374
234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:47:24.821385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1a","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404644+0000","last_change":"2026-03-10T08:36:14.408889+0000","last_active":"2026-03-10T08:36:21.404644+0000","last_peered":"2026-03-10T08:36:21.404644+0000","last_clean":"2026-03-10T08:36:21.404644+0000","last_became_active":"2026-03-10T08:36:14.408814+0000","last_became_peered":"2026-03-10T08:36:14.408814+0000","la
st_unstale":"2026-03-10T08:36:21.404644+0000","last_undegraded":"2026-03-10T08:36:21.404644+0000","last_fullsized":"2026-03-10T08:36:21.404644+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:14:59.985223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.
1b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393587+0000","last_change":"2026-03-10T08:36:16.429130+0000","last_active":"2026-03-10T08:36:21.393587+0000","last_peered":"2026-03-10T08:36:21.393587+0000","last_clean":"2026-03-10T08:36:21.393587+0000","last_became_active":"2026-03-10T08:36:16.428583+0000","last_became_peered":"2026-03-10T08:36:16.428583+0000","last_unstale":"2026-03-10T08:36:21.393587+0000","last_undegraded":"2026-03-10T08:36:21.393587+0000","last_fullsized":"2026-03-10T08:36:21.393587+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:38:43.808128+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880353+0000","last_change":"2026-03-10T08:36:18.451823+0000","last_active":"2026-03-10T08:36:21.880353+0000","last_peered":"2026-03-10T08:36:21.880353+0000","last_clean":"2026-03-10T08:36:21.880353+0000","last_became_active":"2026-03-10T08:36:18.451739+0000","last_became_peered":"2026-03-10T08:36:18.451739+0000","last_unstale":"2026-03-10T08:36:21.880353+0000","last_undegraded":"2026-03-10T08:36:21.880353+0000","last_fullsized":"2026-03-10T08:36:21.880353+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:03:15.740181+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393782+0000","last_change":"2026-03-10T08:36:12.393117+0000","last_active":"2026-03-10T08:36:21.393782+0000","last_peered":"2026-03-10T08:36:21.393782+0000","last_clean":"2026-03-10T08:36:21.393782+0000","last_became_active":"2026-03-10T08:36:12.392842+0000","last_became_peered":"2026-03-10T08:36:12.392842+0000","las
t_unstale":"2026-03-10T08:36:21.393782+0000","last_undegraded":"2026-03-10T08:36:21.393782+0000","last_fullsized":"2026-03-10T08:36:21.393782+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:23:14.416398+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","ve
rsion":"54'5","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404313+0000","last_change":"2026-03-10T08:36:14.413566+0000","last_active":"2026-03-10T08:36:21.404313+0000","last_peered":"2026-03-10T08:36:21.404313+0000","last_clean":"2026-03-10T08:36:21.404313+0000","last_became_active":"2026-03-10T08:36:14.413188+0000","last_became_peered":"2026-03-10T08:36:14.413188+0000","last_unstale":"2026-03-10T08:36:21.404313+0000","last_undegraded":"2026-03-10T08:36:21.404313+0000","last_fullsized":"2026-03-10T08:36:21.404313+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:58:32.802328+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399119+0000","last_change":"2026-03-10T08:36:16.438694+0000","last_active":"2026-03-10T08:36:21.399119+0000","last_peered":"2026-03-10T08:36:21.399119+0000","last_clean":"2026-03-10T08:36:21.399119+0000","last_became_active":"2026-03-10T08:36:16.432482+0000","last_became_peered":"2026-03-10T08:36:16.432482+0000","last_unstale":"2026-03-10T08:36:21.399119+0000","last_undegraded":"2026-03-10T08:36:21.399119+0000","last_fullsized":"2026-03-10T08:36:21.399119+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.
387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:30:46.058949+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393750+0000","last_change":"2026-03-10T08:36:18.441090+0000","last_active":"2026-03-10T08:36:21.393750+0000","last_peered":"2026-03-10T08:36:21.393750+0000","last_clean":"2026-03-10T08:36:21.393750+0000","last_became_active":"2026-03-10T08:36:18.441016+0000","last_became_peered":"2026-03-10T08:36:18.441016+0000","
last_unstale":"2026-03-10T08:36:21.393750+0000","last_undegraded":"2026-03-10T08:36:21.393750+0000","last_fullsized":"2026-03-10T08:36:21.393750+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:35:48.763286+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e",
"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404949+0000","last_change":"2026-03-10T08:36:18.445826+0000","last_active":"2026-03-10T08:36:21.404949+0000","last_peered":"2026-03-10T08:36:21.404949+0000","last_clean":"2026-03-10T08:36:21.404949+0000","last_became_active":"2026-03-10T08:36:18.445743+0000","last_became_peered":"2026-03-10T08:36:18.445743+0000","last_unstale":"2026-03-10T08:36:21.404949+0000","last_undegraded":"2026-03-10T08:36:21.404949+0000","last_fullsized":"2026-03-10T08:36:21.404949+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:09:49.924793+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880127+0000","last_change":"2026-03-10T08:36:12.410439+0000","last_active":"2026-03-10T08:36:21.880127+0000","last_peered":"2026-03-10T08:36:21.880127+0000","last_clean":"2026-03-10T08:36:21.880127+0000","last_became_active":"2026-03-10T08:36:12.404612+0000","last_became_peered":"2026-03-10T08:36:12.404612+0000","last_unstale":"2026-03-10T08:36:21.880127+0000","last_undegraded":"2026-03-10T08:36:21.880127+0000","last_fullsized":"2026-03-10T08:36:21.880127+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374
234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:14:38.545641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.883710+0000","last_change":"2026-03-10T08:36:14.410151+0000","last_active":"2026-03-10T08:36:21.883710+0000","last_peered":"2026-03-10T08:36:21.883710+0000","last_clean":"2026-03-10T08:36:21.883710+0000","last_became_active":"2026-03-10T08:36:14.410061+0000","last_became_peered":"2026-03-10T08:36:14.410061+0000","l
ast_unstale":"2026-03-10T08:36:21.883710+0000","last_undegraded":"2026-03-10T08:36:21.883710+0000","last_fullsized":"2026-03-10T08:36:21.883710+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:49:33.285694+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":
"5.1d","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.893214+0000","last_change":"2026-03-10T08:36:16.427541+0000","last_active":"2026-03-10T08:36:21.893214+0000","last_peered":"2026-03-10T08:36:21.893214+0000","last_clean":"2026-03-10T08:36:21.893214+0000","last_became_active":"2026-03-10T08:36:16.427424+0000","last_became_peered":"2026-03-10T08:36:16.427424+0000","last_unstale":"2026-03-10T08:36:21.893214+0000","last_undegraded":"2026-03-10T08:36:21.893214+0000","last_fullsized":"2026-03-10T08:36:21.893214+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:00:59.337209+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899043+0000","last_change":"2026-03-10T08:36:18.428674+0000","last_active":"2026-03-10T08:36:21.899043+0000","last_peered":"2026-03-10T08:36:21.899043+0000","last_clean":"2026-03-10T08:36:21.899043+0000","last_became_active":"2026-03-10T08:36:18.428583+0000","last_became_peered":"2026-03-10T08:36:18.428583+0000","last_unstale":"2026-03-10T08:36:21.899043+0000","last_undegraded":"2026-03-10T08:36:21.899043+0000","last_fullsized":"2026-03-10T08:36:21.899043+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:08:41.688385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404378+0000","last_change":"2026-03-10T08:36:12.412700+0000","last_active":"2026-03-10T08:36:21.404378+0000","last_peered":"2026-03-10T08:36:21.404378+0000","last_clean":"2026-03-10T08:36:21.404378+0000","last_became_active":"2026-03-10T08:36:12.412581+0000","last_became_peered":"2026-03-10T08:36:12.412581+0000","las
t_unstale":"2026-03-10T08:36:21.404378+0000","last_undegraded":"2026-03-10T08:36:21.404378+0000","last_fullsized":"2026-03-10T08:36:21.404378+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:42:24.337857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","ve
rsion":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899084+0000","last_change":"2026-03-10T08:36:14.414474+0000","last_active":"2026-03-10T08:36:21.899084+0000","last_peered":"2026-03-10T08:36:21.899084+0000","last_clean":"2026-03-10T08:36:21.899084+0000","last_became_active":"2026-03-10T08:36:14.414334+0000","last_became_peered":"2026-03-10T08:36:14.414334+0000","last_unstale":"2026-03-10T08:36:21.899084+0000","last_undegraded":"2026-03-10T08:36:21.899084+0000","last_fullsized":"2026-03-10T08:36:21.899084+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:59:09.761735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404330+0000","last_change":"2026-03-10T08:36:16.431922+0000","last_active":"2026-03-10T08:36:21.404330+0000","last_peered":"2026-03-10T08:36:21.404330+0000","last_clean":"2026-03-10T08:36:21.404330+0000","last_became_active":"2026-03-10T08:36:16.431650+0000","last_became_peered":"2026-03-10T08:36:16.431650+0000","last_unstale":"2026-03-10T08:36:21.404330+0000","last_undegraded":"2026-03-10T08:36:21.404330+0000","last_fullsized":"2026-03-10T08:36:21.404330+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:26:25.723031+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1c","version":"54'1","reported_seq":16,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.403285+0000","last_change":"2026-03-10T08:36:18.427858+0000","last_active":"2026-03-10T08:36:21.403285+0000","last_peered":"2026-03-10T08:36:21.403285+0000","last_clean":"2026-03-10T08:36:21.403285+0000","last_became_active":"2026-03-10T08:36:18.418006+0000","last_became_peered":"2026-03-10T08:36:18.418006+000
0","last_unstale":"2026-03-10T08:36:21.403285+0000","last_undegraded":"2026-03-10T08:36:21.403285+0000","last_fullsized":"2026-03-10T08:36:21.403285+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:05:56.566393+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
3.19","version":"47'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892700+0000","last_change":"2026-03-10T08:36:12.405689+0000","last_active":"2026-03-10T08:36:21.892700+0000","last_peered":"2026-03-10T08:36:21.892700+0000","last_clean":"2026-03-10T08:36:21.892700+0000","last_became_active":"2026-03-10T08:36:12.398785+0000","last_became_peered":"2026-03-10T08:36:12.398785+0000","last_unstale":"2026-03-10T08:36:21.892700+0000","last_undegraded":"2026-03-10T08:36:21.892700+0000","last_fullsized":"2026-03-10T08:36:21.892700+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:55:35.563142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":1039,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880855+0000","last_change":"2026-03-10T08:36:14.478003+0000","last_active":"2026-03-10T08:36:21.880855+0000","last_peered":"2026-03-10T08:36:21.880855+0000","last_clean":"2026-03-10T08:36:21.880855+0000","last_became_active":"2026-03-10T08:36:14.477861+0000","last_became_peered":"2026-03-10T08:36:14.477861+0000","last_unstale":"2026-03-10T08:36:21.880855+0000","last_undegraded":"2026-03-10T08:36:21.880855+0000","last_fullsized":"2026-03-10T08:36:21.880855+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:1
3.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:54:25.539408+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1f","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.453178+0000","last_change":"2026-03-10T08:36:16.433348+0000","last_active":"2026-03-10T08:36:21.453178+0000","last_peered":"2026-03-10T08:36:21.453178+0000","last_clean":"2026-03-10T08:36:21.453178+0000","last_became_active":"2026-03-10T08:36:16.433094+0000","last_became_peered":"2026-03-10T08:36:16.43309
4+0000","last_unstale":"2026-03-10T08:36:21.453178+0000","last_undegraded":"2026-03-10T08:36:21.453178+0000","last_fullsized":"2026-03-10T08:36:21.453178+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:51:12.897388+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid
":"4.f","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892324+0000","last_change":"2026-03-10T08:36:14.415127+0000","last_active":"2026-03-10T08:36:21.892324+0000","last_peered":"2026-03-10T08:36:21.892324+0000","last_clean":"2026-03-10T08:36:21.892324+0000","last_became_active":"2026-03-10T08:36:14.415016+0000","last_became_peered":"2026-03-10T08:36:14.415016+0000","last_unstale":"2026-03-10T08:36:21.892324+0000","last_undegraded":"2026-03-10T08:36:21.892324+0000","last_fullsized":"2026-03-10T08:36:21.892324+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:37:00.346418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886622+0000","last_change":"2026-03-10T08:36:12.423503+0000","last_active":"2026-03-10T08:36:21.886622+0000","last_peered":"2026-03-10T08:36:21.886622+0000","last_clean":"2026-03-10T08:36:21.886622+0000","last_became_active":"2026-03-10T08:36:12.421929+0000","last_became_peered":"2026-03-10T08:36:12.421929+0000","last_unstale":"2026-03-10T08:36:21.886622+0000","last_undegraded":"2026-03-10T08:36:21.886622+0000","last_fullsized":"2026-03-10T08:36:21.886622+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:1
1.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:15:35.769167+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.e","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404156+0000","last_change":"2026-03-10T08:36:16.440418+0000","last_active":"2026-03-10T08:36:21.404156+0000","last_peered":"2026-03-10T08:36:21.404156+0000","last_clean":"2026-03-10T08:36:21.404156+0000","last_became_active":"2026-03-10T08:36:16.440320+0000","last_became_peered":"2026-03-10T08:36:16.440320+0000"
,"last_unstale":"2026-03-10T08:36:21.404156+0000","last_undegraded":"2026-03-10T08:36:21.404156+0000","last_fullsized":"2026-03-10T08:36:21.404156+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:35:15.855610+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393359+0000","last_change":"2026-03-10T08:36:18.440872+0000","last_active":"2026-03-10T08:36:21.393359+0000","last_peered":"2026-03-10T08:36:21.393359+0000","last_clean":"2026-03-10T08:36:21.393359+0000","last_became_active":"2026-03-10T08:36:18.440734+0000","last_became_peered":"2026-03-10T08:36:18.440734+0000","last_unstale":"2026-03-10T08:36:21.393359+0000","last_undegraded":"2026-03-10T08:36:21.393359+0000","last_fullsized":"2026-03-10T08:36:21.393359+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:07:57.062169+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"54'18","reported_seq":55,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886766+0000","last_change":"2026-03-10T08:36:14.475055+0000","last_active":"2026-03-10T08:36:21.886766+0000","last_peered":"2026-03-10T08:36:21.886766+0000","last_clean":"2026-03-10T08:36:21.886766+0000","last_became_active":"2026-03-10T08:36:14.474922+0000","last_became_peered":"2026-03-10T08:36:14.474922+0000","last_unstale":"2026-03-10T08:36:21.886766+0000","last_undegraded":"2026-03-10T08:36:21.886766+0000","last_fullsized":"2026-03-10T08:36:21.886766+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:56:13.952057+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.7","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886741+0000","last_change":"2026-03-10T08:36:12.424745+0000","last_active":"2026-03-10T08:36:21.886741+0000","last_peered":"2026-03-10T08:36:21.886741+0000","last_clean":"2026-03-10T08:36:21.886741+0000","last_became_active":"2026-03-10T08:36:12.424617+0000","last_became_peered":"2026-03-10T08:36:12.424617+00
00","last_unstale":"2026-03-10T08:36:21.886741+0000","last_undegraded":"2026-03-10T08:36:21.886741+0000","last_fullsized":"2026-03-10T08:36:21.886741+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:27:57.626635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5
.1","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404156+0000","last_change":"2026-03-10T08:36:16.431977+0000","last_active":"2026-03-10T08:36:21.404156+0000","last_peered":"2026-03-10T08:36:21.404156+0000","last_clean":"2026-03-10T08:36:21.404156+0000","last_became_active":"2026-03-10T08:36:16.431764+0000","last_became_peered":"2026-03-10T08:36:16.431764+0000","last_unstale":"2026-03-10T08:36:21.404156+0000","last_undegraded":"2026-03-10T08:36:21.404156+0000","last_fullsized":"2026-03-10T08:36:21.404156+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:23:10.636521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404480+0000","last_change":"2026-03-10T08:36:18.433176+0000","last_active":"2026-03-10T08:36:21.404480+0000","last_peered":"2026-03-10T08:36:21.404480+0000","last_clean":"2026-03-10T08:36:21.404480+0000","last_became_active":"2026-03-10T08:36:18.433068+0000","last_became_peered":"2026-03-10T08:36:18.433068+0000","last_unstale":"2026-03-10T08:36:21.404480+0000","last_undegraded":"2026-03-10T08:36:21.404480+0000","last_fullsized":"2026-03-10T08:36:21.404480+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:00:55.564867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","version":"54'14","reported_seq":44,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404922+0000","last_change":"2026-03-10T08:36:14.404265+0000","last_active":"2026-03-10T08:36:21.404922+0000","last_peered":"2026-03-10T08:36:21.404922+0000","last_clean":"2026-03-10T08:36:21.404922+0000","last_became_active":"2026-03-10T08:36:14.403568+0000","last_became_peered":"2026-03-10T08:36:14.403568+0000","las
t_unstale":"2026-03-10T08:36:21.404922+0000","last_undegraded":"2026-03-10T08:36:21.404922+0000","last_fullsized":"2026-03-10T08:36:21.404922+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:12:17.520071+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6
","version":"47'1","reported_seq":28,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880206+0000","last_change":"2026-03-10T08:36:12.416793+0000","last_active":"2026-03-10T08:36:21.880206+0000","last_peered":"2026-03-10T08:36:21.880206+0000","last_clean":"2026-03-10T08:36:21.880206+0000","last_became_active":"2026-03-10T08:36:12.416718+0000","last_became_peered":"2026-03-10T08:36:12.416718+0000","last_unstale":"2026-03-10T08:36:21.880206+0000","last_undegraded":"2026-03-10T08:36:21.880206+0000","last_fullsized":"2026-03-10T08:36:21.880206+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:32:22.954897+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886061+0000","last_change":"2026-03-10T08:36:16.426703+0000","last_active":"2026-03-10T08:36:21.886061+0000","last_peered":"2026-03-10T08:36:21.886061+0000","last_clean":"2026-03-10T08:36:21.886061+0000","last_became_active":"2026-03-10T08:36:16.426604+0000","last_became_peered":"2026-03-10T08:36:16.426604+0000","last_unstale":"2026-03-10T08:36:21.886061+0000","last_undegraded":"2026-03-10T08:36:21.886061+0000","last_fullsized":"2026-03-10T08:36:21.886061+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.38
7844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:49:47.407593+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.398896+0000","last_change":"2026-03-10T08:36:18.449523+0000","last_active":"2026-03-10T08:36:21.398896+0000","last_peered":"2026-03-10T08:36:21.398896+0000","last_clean":"2026-03-10T08:36:21.398896+0000","last_became_active":"2026-03-10T08:36:18.437774+0000","last_became_peered":"2026-03-10T08:36:18.437774+0000","las
t_unstale":"2026-03-10T08:36:21.398896+0000","last_undegraded":"2026-03-10T08:36:21.398896+0000","last_fullsized":"2026-03-10T08:36:21.398896+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:53:29.874022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","ver
sion":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.893369+0000","last_change":"2026-03-10T08:36:14.409167+0000","last_active":"2026-03-10T08:36:21.893369+0000","last_peered":"2026-03-10T08:36:21.893369+0000","last_clean":"2026-03-10T08:36:21.893369+0000","last_became_active":"2026-03-10T08:36:14.409028+0000","last_became_peered":"2026-03-10T08:36:14.409028+0000","last_unstale":"2026-03-10T08:36:21.893369+0000","last_undegraded":"2026-03-10T08:36:21.893369+0000","last_fullsized":"2026-03-10T08:36:21.893369+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:27:04.902781+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394058+0000","last_change":"2026-03-10T08:36:12.413156+0000","last_active":"2026-03-10T08:36:21.394058+0000","last_peered":"2026-03-10T08:36:21.394058+0000","last_clean":"2026-03-10T08:36:21.394058+0000","last_became_active":"2026-03-10T08:36:12.412898+0000","last_became_peered":"2026-03-10T08:36:12.412898+0000","last_unstale":"2026-03-10T08:36:21.394058+0000","last_undegraded":"2026-03-10T08:36:21.394058+0000","last_fullsized":"2026-03-10T08:36:21.394058+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.
374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:54:54.660675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880907+0000","last_change":"2026-03-10T08:36:16.429943+0000","last_active":"2026-03-10T08:36:21.880907+0000","last_peered":"2026-03-10T08:36:21.880907+0000","last_clean":"2026-03-10T08:36:21.880907+0000","last_became_active":"2026-03-10T08:36:16.429841+0000","last_became_peered":"2026-03-10T08:36:16.429841+0000","
last_unstale":"2026-03-10T08:36:21.880907+0000","last_undegraded":"2026-03-10T08:36:21.880907+0000","last_fullsized":"2026-03-10T08:36:21.880907+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:07:24.520669+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","
version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880883+0000","last_change":"2026-03-10T08:36:18.421985+0000","last_active":"2026-03-10T08:36:21.880883+0000","last_peered":"2026-03-10T08:36:21.880883+0000","last_clean":"2026-03-10T08:36:21.880883+0000","last_became_active":"2026-03-10T08:36:18.421916+0000","last_became_peered":"2026-03-10T08:36:18.421916+0000","last_unstale":"2026-03-10T08:36:21.880883+0000","last_undegraded":"2026-03-10T08:36:21.880883+0000","last_fullsized":"2026-03-10T08:36:21.880883+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:11:14.770808+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"54'19","reported_seq":59,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880626+0000","last_change":"2026-03-10T08:36:14.477300+0000","last_active":"2026-03-10T08:36:21.880626+0000","last_peered":"2026-03-10T08:36:21.880626+0000","last_clean":"2026-03-10T08:36:21.880626+0000","last_became_active":"2026-03-10T08:36:14.477175+0000","last_became_peered":"2026-03-10T08:36:14.477175+0000","last_unstale":"2026-03-10T08:36:21.880626+0000","last_undegraded":"2026-03-10T08:36:21.880626+0000","last_fullsized":"2026-03-10T08:36:21.880626+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:30:17.354498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.4","version":"47'1","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892753+0000","last_change":"2026-03-10T08:36:12.406429+0000","last_active":"2026-03-10T08:36:21.892753+0000","last_peered":"2026-03-10T08:36:21.892753+0000","last_clean":"2026-03-10T08:36:21.892753+0000","last_became_active":"2026-03-10T08:36:12.406085+0000","last_became_peered":"2026-03-10T08:36:12.406085+0
000","last_unstale":"2026-03-10T08:36:21.892753+0000","last_undegraded":"2026-03-10T08:36:21.892753+0000","last_fullsized":"2026-03-10T08:36:21.892753+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:29:15.068080+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid"
:"5.2","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.453127+0000","last_change":"2026-03-10T08:36:16.426755+0000","last_active":"2026-03-10T08:36:21.453127+0000","last_peered":"2026-03-10T08:36:21.453127+0000","last_clean":"2026-03-10T08:36:21.453127+0000","last_became_active":"2026-03-10T08:36:16.422838+0000","last_became_peered":"2026-03-10T08:36:16.422838+0000","last_unstale":"2026-03-10T08:36:21.453127+0000","last_undegraded":"2026-03-10T08:36:21.453127+0000","last_fullsized":"2026-03-10T08:36:21.453127+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:43:39.844050+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892728+0000","last_change":"2026-03-10T08:36:18.421116+0000","last_active":"2026-03-10T08:36:21.892728+0000","last_peered":"2026-03-10T08:36:21.892728+0000","last_clean":"2026-03-10T08:36:21.892728+0000","last_became_active":"2026-03-10T08:36:18.420688+0000","last_became_peered":"2026-03-10T08:36:18.420688+0000","last_unstale":"2026-03-10T08:36:21.892728+0000","last_undegraded":"2026-03-10T08:36:21.892728+0000","last_fullsized":"2026-03-10T08:36:21.892728+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:14:05.072286+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","version":"54'28","reported_seq":74,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892681+0000","last_change":"2026-03-10T08:36:14.415292+0000","last_active":"2026-03-10T08:36:21.892681+0000","last_peered":"2026-03-10T08:36:21.892681+0000","last_clean":"2026-03-10T08:36:21.892681+0000","last_became_active":"2026-03-10T08:36:14.415201+0000","last_became_peered":"2026-03-10T08:36:14.415201+0000","las
t_unstale":"2026-03-10T08:36:21.892681+0000","last_undegraded":"2026-03-10T08:36:21.892681+0000","last_fullsized":"2026-03-10T08:36:21.892681+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:33:51.775265+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"3.3","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404597+0000","last_change":"2026-03-10T08:36:12.410926+0000","last_active":"2026-03-10T08:36:21.404597+0000","last_peered":"2026-03-10T08:36:21.404597+0000","last_clean":"2026-03-10T08:36:21.404597+0000","last_became_active":"2026-03-10T08:36:12.410423+0000","last_became_peered":"2026-03-10T08:36:12.410423+0000","last_unstale":"2026-03-10T08:36:21.404597+0000","last_undegraded":"2026-03-10T08:36:21.404597+0000","last_fullsized":"2026-03-10T08:36:21.404597+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:02:10.204754+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"49'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394147+0000","last_change":"2026-03-10T08:36:14.393754+0000","last_active":"2026-03-10T08:36:21.394147+0000","last_peered":"2026-03-10T08:36:21.394147+0000","last_clean":"2026-03-10T08:36:21.394147+0000","last_became_active":"2026-03-10T08:36:12.406162+0000","last_became_peered":"2026-03-10T08:36:12.406162+0000","last_unstale":"2026-03-10T08:36:21.394147+0000","last_undegraded":"2026-03-10T08:36:21.394147+0000","last_fullsized":"2026-03-10T08:36:21.394147+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374
234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:04:51.974136+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00041008700000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880458+0000","last_change":"2026-03-10T08:36:16.425935+0000","last_active":"2026-03-10T08:36:21.880458+0000","last_peered":"2026-03-10T08:36:21.880458+0000","last_clean":"2026-03-10T08:36:21.880458+0000","last_became_active":"2026-03-10T08:36:16.425847+0000","last_became_peered":"2026-03-10T08:36
:16.425847+0000","last_unstale":"2026-03-10T08:36:21.880458+0000","last_undegraded":"2026-03-10T08:36:21.880458+0000","last_fullsized":"2026-03-10T08:36:21.880458+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:27:44.252713+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[
]},{"pgid":"6.6","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886089+0000","last_change":"2026-03-10T08:36:18.453202+0000","last_active":"2026-03-10T08:36:21.886089+0000","last_peered":"2026-03-10T08:36:21.886089+0000","last_clean":"2026-03-10T08:36:21.886089+0000","last_became_active":"2026-03-10T08:36:18.453111+0000","last_became_peered":"2026-03-10T08:36:18.453111+0000","last_unstale":"2026-03-10T08:36:21.886089+0000","last_undegraded":"2026-03-10T08:36:21.886089+0000","last_fullsized":"2026-03-10T08:36:21.886089+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:53:48.051855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"54'13","reported_seq":50,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892780+0000","last_change":"2026-03-10T08:36:14.423735+0000","last_active":"2026-03-10T08:36:21.892780+0000","last_peered":"2026-03-10T08:36:21.892780+0000","last_clean":"2026-03-10T08:36:21.892780+0000","last_became_active":"2026-03-10T08:36:14.423167+0000","last_became_peered":"2026-03-10T08:36:14.423167+0000","last_unstale":"2026-03-10T08:36:21.892780+0000","last_undegraded":"2026-03-10T08:36:21.892780+0000","last_fullsized":"2026-03-10T08:36:21.892780+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:42:20.318241+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.0","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892807+0000","last_change":"2026-03-10T08:36:12.408080+0000","last_active":"2026-03-10T08:36:21.892807+0000","last_peered":"2026-03-10T08:36:21.892807+0000","last_clean":"2026-03-10T08:36:21.892807+0000","last_became_active":"2026-03-10T08:36:12.407879+0000","last_became_peered":"2026-03-10T08:36:12.407879+00
00","last_unstale":"2026-03-10T08:36:21.892807+0000","last_undegraded":"2026-03-10T08:36:21.892807+0000","last_fullsized":"2026-03-10T08:36:21.892807+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:02:43.812546+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2
.1","version":"47'1","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884008+0000","last_change":"2026-03-10T08:36:14.392483+0000","last_active":"2026-03-10T08:36:21.884008+0000","last_peered":"2026-03-10T08:36:21.884008+0000","last_clean":"2026-03-10T08:36:21.884008+0000","last_became_active":"2026-03-10T08:36:12.412540+0000","last_became_peered":"2026-03-10T08:36:12.412540+0000","last_unstale":"2026-03-10T08:36:21.884008+0000","last_undegraded":"2026-03-10T08:36:21.884008+0000","last_fullsized":"2026-03-10T08:36:21.884008+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:05:05.905712+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000361837,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884071+0000","last_change":"2026-03-10T08:36:16.421439+0000","last_active":"2026-03-10T08:36:21.884071+0000","last_peered":"2026-03-10T08:36:21.884071+0000","last_clean":"2026-03-10T08:36:21.884071+0000","last_became_active":"2026-03-10T08:36:16.421242+0000","last_became_peered":"2026-03-10T08:36:16.421242+0000","last_unstale":"2026-03-10T08:36:21.884071+0000","last_undegraded":"2026-03-10T08:36:21.884071+0000","last_fullsized":"2026-03-10T08:36:21.884071+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:
36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:00:23.387729+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399063+0000","last_change":"2026-03-10T08:36:18.437540+0000","last_active":"2026-03-10T08:36:21.399063+0000","last_peered":"2026-03-10T08:36:21.399063+0000","last_clean":"2026-03-10T08:36:21.399063+0000","last_became_active":"2026-03-10T08:36:18.428400+0000","last_became_peered":"2026-03-10T08:36:18.428400+00
00","last_unstale":"2026-03-10T08:36:21.399063+0000","last_undegraded":"2026-03-10T08:36:21.399063+0000","last_fullsized":"2026-03-10T08:36:21.399063+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:11:41.308195+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4
.6","version":"54'12","reported_seq":41,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880278+0000","last_change":"2026-03-10T08:36:14.417982+0000","last_active":"2026-03-10T08:36:21.880278+0000","last_peered":"2026-03-10T08:36:21.880278+0000","last_clean":"2026-03-10T08:36:21.880278+0000","last_became_active":"2026-03-10T08:36:14.417677+0000","last_became_peered":"2026-03-10T08:36:14.417677+0000","last_unstale":"2026-03-10T08:36:21.880278+0000","last_undegraded":"2026-03-10T08:36:21.880278+0000","last_fullsized":"2026-03-10T08:36:21.880278+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:59:30.405880+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880252+0000","last_change":"2026-03-10T08:36:12.405451+0000","last_active":"2026-03-10T08:36:21.880252+0000","last_peered":"2026-03-10T08:36:21.880252+0000","last_clean":"2026-03-10T08:36:21.880252+0000","last_became_active":"2026-03-10T08:36:12.405128+0000","last_became_peered":"2026-03-10T08:36:12.405128+0000","last_unstale":"2026-03-10T08:36:21.880252+0000","last_undegraded":"2026-03-10T08:36:21.880252+0000","last_fullsized":"2026-03-10T08:36:21.880252+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.
374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:46:16.990290+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"54'5","reported_seq":41,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399553+0000","last_change":"2026-03-10T08:36:14.474567+0000","last_active":"2026-03-10T08:36:21.399553+0000","last_peered":"2026-03-10T08:36:21.399553+0000","last_clean":"2026-03-10T08:36:21.399553+0000","last_became_active":"2026-03-10T08:36:12.408655+0000","last_became_peered":"2026-03-10T08:36:12.408655+0000","
last_unstale":"2026-03-10T08:36:21.399553+0000","last_undegraded":"2026-03-10T08:36:21.399553+0000","last_fullsized":"2026-03-10T08:36:21.399553+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:54:32.220383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.0039831160000000001,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":6,"num_read_kb":1,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snap
s":[]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393825+0000","last_change":"2026-03-10T08:36:16.416970+0000","last_active":"2026-03-10T08:36:21.393825+0000","last_peered":"2026-03-10T08:36:21.393825+0000","last_clean":"2026-03-10T08:36:21.393825+0000","last_became_active":"2026-03-10T08:36:16.416699+0000","last_became_peered":"2026-03-10T08:36:16.416699+0000","last_unstale":"2026-03-10T08:36:21.393825+0000","last_undegraded":"2026-03-10T08:36:21.393825+0000","last_fullsized":"2026-03-10T08:36:21.393825+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:12:03.436428+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892476+0000","last_change":"2026-03-10T08:36:18.421216+0000","last_active":"2026-03-10T08:36:21.892476+0000","last_peered":"2026-03-10T08:36:21.892476+0000","last_clean":"2026-03-10T08:36:21.892476+0000","last_became_active":"2026-03-10T08:36:18.421044+0000","last_became_peered":"2026-03-10T08:36:18.421044+0000","last_unstale":"2026-03-10T08:36:21.892476+0000","last_undegraded":"2026-03-10T08:36:21.892476+0000","last_fullsized":"2026-03-10T08:36:21.892476+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:50:29.875735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.5","version":"54'16","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452247+0000","last_change":"2026-03-10T08:36:14.471730+0000","last_active":"2026-03-10T08:36:21.452247+0000","last_peered":"2026-03-10T08:36:21.452247+0000","last_clean":"2026-03-10T08:36:21.452247+0000","last_became_active":"2026-03-10T08:36:14.471564+0000","last_became_peered":"2026-03-10T08:36:14.471564+0000","las
t_unstale":"2026-03-10T08:36:21.452247+0000","last_undegraded":"2026-03-10T08:36:21.452247+0000","last_fullsized":"2026-03-10T08:36:21.452247+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:04:39.245478+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3
.2","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886831+0000","last_change":"2026-03-10T08:36:12.412829+0000","last_active":"2026-03-10T08:36:21.886831+0000","last_peered":"2026-03-10T08:36:21.886831+0000","last_clean":"2026-03-10T08:36:21.886831+0000","last_became_active":"2026-03-10T08:36:12.412635+0000","last_became_peered":"2026-03-10T08:36:12.412635+0000","last_unstale":"2026-03-10T08:36:21.886831+0000","last_undegraded":"2026-03-10T08:36:21.886831+0000","last_fullsized":"2026-03-10T08:36:21.886831+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:23:44.228885+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"18'32","reported_seq":37,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402810+0000","last_change":"2026-03-10T08:36:10.389035+0000","last_active":"2026-03-10T08:36:21.402810+0000","last_peered":"2026-03-10T08:36:21.402810+0000","last_clean":"2026-03-10T08:36:21.402810+0000","last_became_active":"2026-03-10T08:36:10.382291+0000","last_became_peered":"2026-03-10T08:36:10.382291+0000","last_unstale":"2026-03-10T08:36:21.402810+0000","last_undegraded":"2026-03-10T08:36:21.402810+0000","last_fullsized":"2026-03-10T08:36:21.402810+0000","mapping_epoch":44,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":45,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:35:15.259757+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:35:15.25
9757+0000","last_clean_scrub_stamp":"2026-03-10T08:35:15.259757+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:10:56.579708+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402831+0000","last_change":"2026-03-10T08:36:16.438526+0000","last_active":"2026-03-10T08:36:21.402831+0000","last_peered":"2026-03-10T08:36:21.402831+0000","last_clean":"2026-03-10T08:36:21.402831+0000","last_became_active":"2026-03-10T08:36:16.438375+0000","last_became_peered":"2026-03-10T08:36:16.
438375+0000","last_unstale":"2026-03-10T08:36:21.402831+0000","last_undegraded":"2026-03-10T08:36:21.402831+0000","last_fullsized":"2026-03-10T08:36:21.402831+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:00:48.852407+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{
"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393470+0000","last_change":"2026-03-10T08:36:18.446681+0000","last_active":"2026-03-10T08:36:21.393470+0000","last_peered":"2026-03-10T08:36:21.393470+0000","last_clean":"2026-03-10T08:36:21.393470+0000","last_became_active":"2026-03-10T08:36:18.446574+0000","last_became_peered":"2026-03-10T08:36:18.446574+0000","last_unstale":"2026-03-10T08:36:21.393470+0000","last_undegraded":"2026-03-10T08:36:21.393470+0000","last_fullsized":"2026-03-10T08:36:21.393470+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:39:15.005470+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404523+0000","last_change":"2026-03-10T08:36:14.411679+0000","last_active":"2026-03-10T08:36:21.404523+0000","last_peered":"2026-03-10T08:36:21.404523+0000","last_clean":"2026-03-10T08:36:21.404523+0000","last_became_active":"2026-03-10T08:36:14.411268+0000","last_became_peered":"2026-03-10T08:36:14.411268+0000","last_unstale":"2026-03-10T08:36:21.404523+0000","last_undegraded":"2026-03-10T08:36:21.404523+0000","last_fullsized":"2026-03-10T08:36:21.404523+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:34:20.224472+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.9","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404498+0000","last_change":"2026-03-10T08:36:12.404771+0000","last_active":"2026-03-10T08:36:21.404498+0000","last_peered":"2026-03-10T08:36:21.404498+0000","last_clean":"2026-03-10T08:36:21.404498+0000","last_became_active":"2026-03-10T08:36:12.404449+0000","last_became_peered":"2026-03-10T08:36:12.404449+00
00","last_unstale":"2026-03-10T08:36:21.404498+0000","last_undegraded":"2026-03-10T08:36:21.404498+0000","last_fullsized":"2026-03-10T08:36:21.404498+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:24:33.726957+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5
.f","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394471+0000","last_change":"2026-03-10T08:36:16.434801+0000","last_active":"2026-03-10T08:36:21.394471+0000","last_peered":"2026-03-10T08:36:21.394471+0000","last_clean":"2026-03-10T08:36:21.394471+0000","last_became_active":"2026-03-10T08:36:16.434725+0000","last_became_peered":"2026-03-10T08:36:16.434725+0000","last_unstale":"2026-03-10T08:36:21.394471+0000","last_undegraded":"2026-03-10T08:36:21.394471+0000","last_fullsized":"2026-03-10T08:36:21.394471+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:51.741035+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.885985+0000","last_change":"2026-03-10T08:36:18.428728+0000","last_active":"2026-03-10T08:36:21.885985+0000","last_peered":"2026-03-10T08:36:21.885985+0000","last_clean":"2026-03-10T08:36:21.885985+0000","last_became_active":"2026-03-10T08:36:18.428599+0000","last_became_peered":"2026-03-10T08:36:18.428599+0000","last_unstale":"2026-03-10T08:36:21.885985+0000","last_undegraded":"2026-03-10T08:36:21.885985+0000","last_fullsized":"2026-03-10T08:36:21.885985+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:08:50.956076+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","version":"54'17","reported_seq":51,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406399+0000","last_change":"2026-03-10T08:36:14.413530+0000","last_active":"2026-03-10T08:36:21.406399+0000","last_peered":"2026-03-10T08:36:21.406399+0000","last_clean":"2026-03-10T08:36:21.406399+0000","last_became_active":"2026-03-10T08:36:14.413376+0000","last_became_peered":"2026-03-10T08:36:14.413376+0000","las
t_unstale":"2026-03-10T08:36:21.406399+0000","last_undegraded":"2026-03-10T08:36:21.406399+0000","last_fullsized":"2026-03-10T08:36:21.406399+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:11:24.816751+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452590+0000","last_change":"2026-03-10T08:36:12.416243+0000","last_active":"2026-03-10T08:36:21.452590+0000","last_peered":"2026-03-10T08:36:21.452590+0000","last_clean":"2026-03-10T08:36:21.452590+0000","last_became_active":"2026-03-10T08:36:12.416152+0000","last_became_peered":"2026-03-10T08:36:12.416152+0000","last_unstale":"2026-03-10T08:36:21.452590+0000","last_undegraded":"2026-03-10T08:36:21.452590+0000","last_fullsized":"2026-03-10T08:36:21.452590+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:19:48.989936+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892186+0000","last_change":"2026-03-10T08:36:16.427876+0000","last_active":"2026-03-10T08:36:21.892186+0000","last_peered":"2026-03-10T08:36:21.892186+0000","last_clean":"2026-03-10T08:36:21.892186+0000","last_became_active":"2026-03-10T08:36:16.427772+0000","last_became_peered":"2026-03-10T08:36:16.427772+0000","last_unstale":"2026-03-10T08:36:21.892186+0000","last_undegraded":"2026-03-10T08:36:21.892186+0000","last_fullsized":"2026-03-10T08:36:21.892186+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.3878
44+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:20:25.396468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884133+0000","last_change":"2026-03-10T08:36:18.433698+0000","last_active":"2026-03-10T08:36:21.884133+0000","last_peered":"2026-03-10T08:36:21.884133+0000","last_clean":"2026-03-10T08:36:21.884133+0000","last_became_active":"2026-03-10T08:36:18.433591+0000","last_became_peered":"2026-03-10T08:36:18.433591+0000","last_
unstale":"2026-03-10T08:36:21.884133+0000","last_undegraded":"2026-03-10T08:36:21.884133+0000","last_fullsized":"2026-03-10T08:36:21.884133+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:56:51.011977+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","versi
on":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406299+0000","last_change":"2026-03-10T08:36:14.409168+0000","last_active":"2026-03-10T08:36:21.406299+0000","last_peered":"2026-03-10T08:36:21.406299+0000","last_clean":"2026-03-10T08:36:21.406299+0000","last_became_active":"2026-03-10T08:36:14.408855+0000","last_became_peered":"2026-03-10T08:36:14.408855+0000","last_unstale":"2026-03-10T08:36:21.406299+0000","last_undegraded":"2026-03-10T08:36:21.406299+0000","last_fullsized":"2026-03-10T08:36:21.406299+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:04:03.612726+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886581+0000","last_change":"2026-03-10T08:36:12.423383+0000","last_active":"2026-03-10T08:36:21.886581+0000","last_peered":"2026-03-10T08:36:21.886581+0000","last_clean":"2026-03-10T08:36:21.886581+0000","last_became_active":"2026-03-10T08:36:12.420213+0000","last_became_peered":"2026-03-10T08:36:12.420213+0000","last_unstale":"2026-03-10T08:36:21.886581+0000","last_undegraded":"2026-03-10T08:36:21.886581+0000","last_fullsized":"2026-03-10T08:36:21.886581+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.
374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:31:54.743516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884311+0000","last_change":"2026-03-10T08:36:16.421500+0000","last_active":"2026-03-10T08:36:21.884311+0000","last_peered":"2026-03-10T08:36:21.884311+0000","last_clean":"2026-03-10T08:36:21.884311+0000","last_became_active":"2026-03-10T08:36:16.421361+0000","last_became_peered":"2026-03-10T08:36:16.421361+0000","
last_unstale":"2026-03-10T08:36:21.884311+0000","last_undegraded":"2026-03-10T08:36:21.884311+0000","last_fullsized":"2026-03-10T08:36:21.884311+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:36:52.041137+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","
version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406252+0000","last_change":"2026-03-10T08:36:18.434947+0000","last_active":"2026-03-10T08:36:21.406252+0000","last_peered":"2026-03-10T08:36:21.406252+0000","last_clean":"2026-03-10T08:36:21.406252+0000","last_became_active":"2026-03-10T08:36:18.434860+0000","last_became_peered":"2026-03-10T08:36:18.434860+0000","last_unstale":"2026-03-10T08:36:21.406252+0000","last_undegraded":"2026-03-10T08:36:21.406252+0000","last_fullsized":"2026-03-10T08:36:21.406252+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:05:28.158200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880695+0000","last_change":"2026-03-10T08:36:14.417922+0000","last_active":"2026-03-10T08:36:21.880695+0000","last_peered":"2026-03-10T08:36:21.880695+0000","last_clean":"2026-03-10T08:36:21.880695+0000","last_became_active":"2026-03-10T08:36:14.417566+0000","last_became_peered":"2026-03-10T08:36:14.417566+0000","last_unstale":"2026-03-10T08:36:21.880695+0000","last_undegraded":"2026-03-10T08:36:21.880695+0000","last_fullsized":"2026-03-10T08:36:21.880695+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382
053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:05:04.758549+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393957+0000","last_change":"2026-03-10T08:36:12.413266+0000","last_active":"2026-03-10T08:36:21.393957+0000","last_peered":"2026-03-10T08:36:21.393957+0000","last_clean":"2026-03-10T08:36:21.393957+0000","last_became_active":"2026-03-10T08:36:12.413050+0000","last_became_peered":"2026-03-10T08:36:12.413050+0000"
,"last_unstale":"2026-03-10T08:36:21.393957+0000","last_undegraded":"2026-03-10T08:36:21.393957+0000","last_fullsized":"2026-03-10T08:36:21.393957+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:56:50.949792+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a"
,"version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884217+0000","last_change":"2026-03-10T08:36:16.426219+0000","last_active":"2026-03-10T08:36:21.884217+0000","last_peered":"2026-03-10T08:36:21.884217+0000","last_clean":"2026-03-10T08:36:21.884217+0000","last_became_active":"2026-03-10T08:36:16.426149+0000","last_became_peered":"2026-03-10T08:36:16.426149+0000","last_unstale":"2026-03-10T08:36:21.884217+0000","last_undegraded":"2026-03-10T08:36:21.884217+0000","last_fullsized":"2026-03-10T08:36:21.884217+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:11:29.939825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880669+0000","last_change":"2026-03-10T08:36:18.451537+0000","last_active":"2026-03-10T08:36:21.880669+0000","last_peered":"2026-03-10T08:36:21.880669+0000","last_clean":"2026-03-10T08:36:21.880669+0000","last_became_active":"2026-03-10T08:36:18.451466+0000","last_became_peered":"2026-03-10T08:36:18.451466+0000","last_unstale":"2026-03-10T08:36:21.880669+0000","last_undegraded":"2026-03-10T08:36:21.880669+0000","last_fullsized":"2026-03-10T08:36:21.880669+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:47:18.003501+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","version":"54'19","reported_seq":54,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452668+0000","last_change":"2026-03-10T08:36:14.471656+0000","last_active":"2026-03-10T08:36:21.452668+0000","last_peered":"2026-03-10T08:36:21.452668+0000","last_clean":"2026-03-10T08:36:21.452668+0000","last_became_active":"2026-03-10T08:36:14.471429+0000","last_became_peered":"2026-03-10T08:36:14.471429+0000","las
t_unstale":"2026-03-10T08:36:21.452668+0000","last_undegraded":"2026-03-10T08:36:21.452668+0000","last_fullsized":"2026-03-10T08:36:21.452668+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:43:18.792865+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3
.d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402717+0000","last_change":"2026-03-10T08:36:12.407756+0000","last_active":"2026-03-10T08:36:21.402717+0000","last_peered":"2026-03-10T08:36:21.402717+0000","last_clean":"2026-03-10T08:36:21.402717+0000","last_became_active":"2026-03-10T08:36:12.407524+0000","last_became_peered":"2026-03-10T08:36:12.407524+0000","last_unstale":"2026-03-10T08:36:21.402717+0000","last_undegraded":"2026-03-10T08:36:21.402717+0000","last_fullsized":"2026-03-10T08:36:21.402717+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:39:18.816135+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884243+0000","last_change":"2026-03-10T08:36:16.419552+0000","last_active":"2026-03-10T08:36:21.884243+0000","last_peered":"2026-03-10T08:36:21.884243+0000","last_clean":"2026-03-10T08:36:21.884243+0000","last_became_active":"2026-03-10T08:36:16.419415+0000","last_became_peered":"2026-03-10T08:36:16.419415+0000","last_unstale":"2026-03-10T08:36:21.884243+0000","last_undegraded":"2026-03-10T08:36:21.884243+0000","last_fullsized":"2026-03-10T08:36:21.884243+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.3878
44+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:20:21.971292+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402704+0000","last_change":"2026-03-10T08:36:18.437677+0000","last_active":"2026-03-10T08:36:21.402704+0000","last_peered":"2026-03-10T08:36:21.402704+0000","last_clean":"2026-03-10T08:36:21.402704+0000","last_became_active":"2026-03-10T08:36:18.428520+0000","last_became_peered":"2026-03-10T08:36:18.428520+0000","last_
unstale":"2026-03-10T08:36:21.402704+0000","last_undegraded":"2026-03-10T08:36:21.402704+0000","last_fullsized":"2026-03-10T08:36:21.402704+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:04:44.619391+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","versi
on":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404808+0000","last_change":"2026-03-10T08:36:14.413066+0000","last_active":"2026-03-10T08:36:21.404808+0000","last_peered":"2026-03-10T08:36:21.404808+0000","last_clean":"2026-03-10T08:36:21.404808+0000","last_became_active":"2026-03-10T08:36:14.412541+0000","last_became_peered":"2026-03-10T08:36:14.412541+0000","last_unstale":"2026-03-10T08:36:21.404808+0000","last_undegraded":"2026-03-10T08:36:21.404808+0000","last_fullsized":"2026-03-10T08:36:21.404808+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:02:39.196023+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399478+0000","last_change":"2026-03-10T08:36:12.408759+0000","last_active":"2026-03-10T08:36:21.399478+0000","last_peered":"2026-03-10T08:36:21.399478+0000","last_clean":"2026-03-10T08:36:21.399478+0000","last_became_active":"2026-03-10T08:36:12.408510+0000","last_became_peered":"2026-03-10T08:36:12.408510+0000","last_unstale":"2026-03-10T08:36:21.399478+0000","last_undegraded":"2026-03-10T08:36:21.399478+0000","last_fullsized":"2026-03-10T08:36:21.399478+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:1
1.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:41:04.656969+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884176+0000","last_change":"2026-03-10T08:36:16.419489+0000","last_active":"2026-03-10T08:36:21.884176+0000","last_peered":"2026-03-10T08:36:21.884176+0000","last_clean":"2026-03-10T08:36:21.884176+0000","last_became_active":"2026-03-10T08:36:16.419287+0000","last_became_peered":"2026-03-10T08:36:16.419287+0000",
"last_unstale":"2026-03-10T08:36:21.884176+0000","last_undegraded":"2026-03-10T08:36:21.884176+0000","last_fullsized":"2026-03-10T08:36:21.884176+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:41:36.647112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b",
"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886183+0000","last_change":"2026-03-10T08:36:18.453177+0000","last_active":"2026-03-10T08:36:21.886183+0000","last_peered":"2026-03-10T08:36:21.886183+0000","last_clean":"2026-03-10T08:36:21.886183+0000","last_became_active":"2026-03-10T08:36:18.453096+0000","last_became_peered":"2026-03-10T08:36:18.453096+0000","last_unstale":"2026-03-10T08:36:21.886183+0000","last_undegraded":"2026-03-10T08:36:21.886183+0000","last_fullsized":"2026-03-10T08:36:21.886183+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:18:05.297561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394296+0000","last_change":"2026-03-10T08:36:14.473434+0000","last_active":"2026-03-10T08:36:21.394296+0000","last_peered":"2026-03-10T08:36:21.394296+0000","last_clean":"2026-03-10T08:36:21.394296+0000","last_became_active":"2026-03-10T08:36:14.473202+0000","last_became_peered":"2026-03-10T08:36:14.473202+0000","last_unstale":"2026-03-10T08:36:21.394296+0000","last_undegraded":"2026-03-10T08:36:21.394296+0000","last_fullsized":"2026-03-10T08:36:21.394296+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:21:11.362393+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.f","version":"47'2","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399360+0000","last_change":"2026-03-10T08:36:12.408166+0000","last_active":"2026-03-10T08:36:21.399360+0000","last_peered":"2026-03-10T08:36:21.399360+0000","last_clean":"2026-03-10T08:36:21.399360+0000","last_became_active":"2026-03-10T08:36:12.408081+0000","last_became_peered":"2026-03-10T08:36:12.408081+0
000","last_unstale":"2026-03-10T08:36:21.399360+0000","last_undegraded":"2026-03-10T08:36:21.399360+0000","last_fullsized":"2026-03-10T08:36:21.399360+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:52:25.669803+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid
":"5.9","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399423+0000","last_change":"2026-03-10T08:36:16.438459+0000","last_active":"2026-03-10T08:36:21.399423+0000","last_peered":"2026-03-10T08:36:21.399423+0000","last_clean":"2026-03-10T08:36:21.399423+0000","last_became_active":"2026-03-10T08:36:16.432537+0000","last_became_peered":"2026-03-10T08:36:16.432537+0000","last_unstale":"2026-03-10T08:36:21.399423+0000","last_undegraded":"2026-03-10T08:36:21.399423+0000","last_fullsized":"2026-03-10T08:36:21.399423+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:04:18.646873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394263+0000","last_change":"2026-03-10T08:36:18.440967+0000","last_active":"2026-03-10T08:36:21.394263+0000","last_peered":"2026-03-10T08:36:21.394263+0000","last_clean":"2026-03-10T08:36:21.394263+0000","last_became_active":"2026-03-10T08:36:18.438962+0000","last_became_peered":"2026-03-10T08:36:18.438962+0000","last_unstale":"2026-03-10T08:36:21.394263+0000","last_undegraded":"2026-03-10T08:36:21.394263+0000","last_fullsized":"2026-03-10T08:36:21.394263+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:55:00.104333+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452555+0000","last_change":"2026-03-10T08:36:12.414967+0000","last_active":"2026-03-10T08:36:21.452555+0000","last_peered":"2026-03-10T08:36:21.452555+0000","last_clean":"2026-03-10T08:36:21.452555+0000","last_became_active":"2026-03-10T08:36:12.414119+0000","last_became_peered":"2026-03-10T08:36:12.414119+0000","last
_unstale":"2026-03-10T08:36:21.452555+0000","last_undegraded":"2026-03-10T08:36:21.452555+0000","last_fullsized":"2026-03-10T08:36:21.452555+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:22:33.807844+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","ver
sion":"54'6","reported_seq":32,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899015+0000","last_change":"2026-03-10T08:36:14.475483+0000","last_active":"2026-03-10T08:36:21.899015+0000","last_peered":"2026-03-10T08:36:21.899015+0000","last_clean":"2026-03-10T08:36:21.899015+0000","last_became_active":"2026-03-10T08:36:14.475341+0000","last_became_peered":"2026-03-10T08:36:14.475341+0000","last_unstale":"2026-03-10T08:36:21.899015+0000","last_undegraded":"2026-03-10T08:36:21.899015+0000","last_fullsized":"2026-03-10T08:36:21.899015+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:14:03.228863+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394007+0000","last_change":"2026-03-10T08:36:16.429063+0000","last_active":"2026-03-10T08:36:21.394007+0000","last_peered":"2026-03-10T08:36:21.394007+0000","last_clean":"2026-03-10T08:36:21.394007+0000","last_became_active":"2026-03-10T08:36:16.428447+0000","last_became_peered":"2026-03-10T08:36:16.428447+0000","last_unstale":"2026-03-10T08:36:21.394007+0000","last_undegraded":"2026-03-10T08:36:21.394007+0000","last_fullsized":"2026-03-10T08:36:21.394007+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387
844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:34:52.317817+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.403208+0000","last_change":"2026-03-10T08:36:18.439956+0000","last_active":"2026-03-10T08:36:21.403208+0000","last_peered":"2026-03-10T08:36:21.403208+0000","last_clean":"2026-03-10T08:36:21.403208+0000","last_became_active":"2026-03-10T08:36:18.439812+0000","last_became_peered":"2026-03-10T08:36:18.439812+0000","las
t_unstale":"2026-03-10T08:36:21.403208+0000","last_undegraded":"2026-03-10T08:36:21.403208+0000","last_fullsized":"2026-03-10T08:36:21.403208+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:48:08.845489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","ve
rsion":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880722+0000","last_change":"2026-03-10T08:36:14.474278+0000","last_active":"2026-03-10T08:36:21.880722+0000","last_peered":"2026-03-10T08:36:21.880722+0000","last_clean":"2026-03-10T08:36:21.880722+0000","last_became_active":"2026-03-10T08:36:14.474130+0000","last_became_peered":"2026-03-10T08:36:14.474130+0000","last_unstale":"2026-03-10T08:36:21.880722+0000","last_undegraded":"2026-03-10T08:36:21.880722+0000","last_fullsized":"2026-03-10T08:36:21.880722+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:42:45.302267+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399302+0000","last_change":"2026-03-10T08:36:12.407818+0000","last_active":"2026-03-10T08:36:21.399302+0000","last_peered":"2026-03-10T08:36:21.399302+0000","last_clean":"2026-03-10T08:36:21.399302+0000","last_became_active":"2026-03-10T08:36:12.407646+0000","last_became_peered":"2026-03-10T08:36:12.407646+0000","last_unstale":"2026-03-10T08:36:21.399302+0000","last_undegraded":"2026-03-10T08:36:21.399302+0000","last_fullsized":"2026-03-10T08:36:21.399302+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:40:01.985072+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899130+0000","last_change":"2026-03-10T08:36:16.431481+0000","last_active":"2026-03-10T08:36:21.899130+0000","last_peered":"2026-03-10T08:36:21.899130+0000","last_clean":"2026-03-10T08:36:21.899130+0000","last_became_active":"2026-03-10T08:36:16.431324+0000","last_became_peered":"2026-03-10T08:36:16.431324+0000
","last_unstale":"2026-03-10T08:36:21.899130+0000","last_undegraded":"2026-03-10T08:36:21.899130+0000","last_fullsized":"2026-03-10T08:36:21.899130+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:05:07.198705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1
4","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884391+0000","last_change":"2026-03-10T08:36:18.450766+0000","last_active":"2026-03-10T08:36:21.884391+0000","last_peered":"2026-03-10T08:36:21.884391+0000","last_clean":"2026-03-10T08:36:21.884391+0000","last_became_active":"2026-03-10T08:36:18.450693+0000","last_became_peered":"2026-03-10T08:36:18.450693+0000","last_unstale":"2026-03-10T08:36:21.884391+0000","last_undegraded":"2026-03-10T08:36:21.884391+0000","last_fullsized":"2026-03-10T08:36:21.884391+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:21:17.287883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394357+0000","last_change":"2026-03-10T08:36:14.473502+0000","last_active":"2026-03-10T08:36:21.394357+0000","last_peered":"2026-03-10T08:36:21.394357+0000","last_clean":"2026-03-10T08:36:21.394357+0000","last_became_active":"2026-03-10T08:36:14.473352+0000","last_became_peered":"2026-03-10T08:36:14.473352+0000","last_unstale":"2026-03-10T08:36:21.394357+0000","last_undegraded":"2026-03-10T08:36:21.394357+0000","last_fullsized":"2026-03-10T08:36:21.394357+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:09:58.715484+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.12","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880179+0000","last_change":"2026-03-10T08:36:12.405014+0000","last_active":"2026-03-10T08:36:21.880179+0000","last_peered":"2026-03-10T08:36:21.880179+0000","last_clean":"2026-03-10T08:36:21.880179+0000","last_became_active":"2026-03-10T08:36:12.404112+0000","last_became_peered":"2026-03-10T08:36:12.404112+000
0","last_unstale":"2026-03-10T08:36:21.880179+0000","last_undegraded":"2026-03-10T08:36:21.880179+0000","last_fullsized":"2026-03-10T08:36:21.880179+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:45:12.121417+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.
14","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886646+0000","last_change":"2026-03-10T08:36:16.431415+0000","last_active":"2026-03-10T08:36:21.886646+0000","last_peered":"2026-03-10T08:36:21.886646+0000","last_clean":"2026-03-10T08:36:21.886646+0000","last_became_active":"2026-03-10T08:36:16.431198+0000","last_became_peered":"2026-03-10T08:36:16.431198+0000","last_unstale":"2026-03-10T08:36:21.886646+0000","last_undegraded":"2026-03-10T08:36:21.886646+0000","last_fullsized":"2026-03-10T08:36:21.886646+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:21:44.511489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404055+0000","last_change":"2026-03-10T08:36:18.446197+0000","last_active":"2026-03-10T08:36:21.404055+0000","last_peered":"2026-03-10T08:36:21.404055+0000","last_clean":"2026-03-10T08:36:21.404055+0000","last_became_active":"2026-03-10T08:36:18.446120+0000","last_became_peered":"2026-03-10T08:36:18.446120+0000","last_unstale":"2026-03-10T08:36:21.404055+0000","last_undegraded":"2026-03-10T08:36:21.404055+0000","last_fullsized":"2026-03-10T08:36:21.404055+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:46:13.704547+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.898972+0000","last_change":"2026-03-10T08:36:14.474243+0000","last_active":"2026-03-10T08:36:21.898972+0000","last_peered":"2026-03-10T08:36:21.898972+0000","last_clean":"2026-03-10T08:36:21.898972+0000","last_became_active":"2026-03-10T08:36:14.474103+0000","last_became_peered":"2026-03-10T08:36:14.474103+0000","l
ast_unstale":"2026-03-10T08:36:21.898972+0000","last_undegraded":"2026-03-10T08:36:21.898972+0000","last_fullsized":"2026-03-10T08:36:21.898972+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:16:22.013377+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3
.13","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399208+0000","last_change":"2026-03-10T08:36:12.400550+0000","last_active":"2026-03-10T08:36:21.399208+0000","last_peered":"2026-03-10T08:36:21.399208+0000","last_clean":"2026-03-10T08:36:21.399208+0000","last_became_active":"2026-03-10T08:36:12.400206+0000","last_became_peered":"2026-03-10T08:36:12.400206+0000","last_unstale":"2026-03-10T08:36:21.399208+0000","last_undegraded":"2026-03-10T08:36:21.399208+0000","last_fullsized":"2026-03-10T08:36:21.399208+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:52:05.723594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394508+0000","last_change":"2026-03-10T08:36:16.417048+0000","last_active":"2026-03-10T08:36:21.394508+0000","last_peered":"2026-03-10T08:36:21.394508+0000","last_clean":"2026-03-10T08:36:21.394508+0000","last_became_active":"2026-03-10T08:36:16.416864+0000","last_became_peered":"2026-03-10T08:36:16.416864+0000","last_unstale":"2026-03-10T08:36:21.394508+0000","last_undegraded":"2026-03-10T08:36:21.394508+0000","last_fullsized":"2026-03-10T08:36:21.394508+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.38
7844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:20:41.922881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.879959+0000","last_change":"2026-03-10T08:36:18.451235+0000","last_active":"2026-03-10T08:36:21.879959+0000","last_peered":"2026-03-10T08:36:21.879959+0000","last_clean":"2026-03-10T08:36:21.879959+0000","last_became_active":"2026-03-10T08:36:18.451125+0000","last_became_peered":"2026-03-10T08:36:18.451125+0000","la
st_unstale":"2026-03-10T08:36:21.879959+0000","last_undegraded":"2026-03-10T08:36:21.879959+0000","last_fullsized":"2026-03-10T08:36:21.879959+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:15:03.190145+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","v
ersion":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404419+0000","last_change":"2026-03-10T08:36:14.413718+0000","last_active":"2026-03-10T08:36:21.404419+0000","last_peered":"2026-03-10T08:36:21.404419+0000","last_clean":"2026-03-10T08:36:21.404419+0000","last_became_active":"2026-03-10T08:36:14.412454+0000","last_became_peered":"2026-03-10T08:36:14.412454+0000","last_unstale":"2026-03-10T08:36:21.404419+0000","last_undegraded":"2026-03-10T08:36:21.404419+0000","last_fullsized":"2026-03-10T08:36:21.404419+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:20:31.986118+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404396+0000","last_change":"2026-03-10T08:36:12.411048+0000","last_active":"2026-03-10T08:36:21.404396+0000","last_peered":"2026-03-10T08:36:21.404396+0000","last_clean":"2026-03-10T08:36:21.404396+0000","last_became_active":"2026-03-10T08:36:12.410963+0000","last_became_peered":"2026-03-10T08:36:12.410963+0000","last_unstale":"2026-03-10T08:36:21.404396+0000","last_undegraded":"2026-03-10T08:36:21.404396+0000","last_fullsized":"2026-03-10T08:36:21.404396+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:02:42.488859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892845+0000","last_change":"2026-03-10T08:36:16.416965+0000","last_active":"2026-03-10T08:36:21.892845+0000","last_peered":"2026-03-10T08:36:21.892845+0000","last_clean":"2026-03-10T08:36:21.892845+0000","last_became_active":"2026-03-10T08:36:16.416880+0000","last_became_peered":"2026-03-10T08:36:16.416880+0000
","last_unstale":"2026-03-10T08:36:21.892845+0000","last_undegraded":"2026-03-10T08:36:21.892845+0000","last_fullsized":"2026-03-10T08:36:21.892845+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:50:49.036177+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1
1","version":"54'1","reported_seq":16,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886016+0000","last_change":"2026-03-10T08:36:18.428754+0000","last_active":"2026-03-10T08:36:21.886016+0000","last_peered":"2026-03-10T08:36:21.886016+0000","last_clean":"2026-03-10T08:36:21.886016+0000","last_became_active":"2026-03-10T08:36:18.428686+0000","last_became_peered":"2026-03-10T08:36:18.428686+0000","last_unstale":"2026-03-10T08:36:21.886016+0000","last_undegraded":"2026-03-10T08:36:21.886016+0000","last_fullsized":"2026-03-10T08:36:21.886016+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:31:01.768661+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892147+0000","last_change":"2026-03-10T08:36:14.411554+0000","last_active":"2026-03-10T08:36:21.892147+0000","last_peered":"2026-03-10T08:36:21.892147+0000","last_clean":"2026-03-10T08:36:21.892147+0000","last_became_active":"2026-03-10T08:36:14.411179+0000","last_became_peered":"2026-03-10T08:36:14.411179+0000","last_unstale":"2026-03-10T08:36:21.892147+0000","last_undegraded":"2026-03-10T08:36:21.892147+0000","last_fullsized":"2026-03-10T08:36:21.892147+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.3
82053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:15:58.327178+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.15","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399152+0000","last_change":"2026-03-10T08:36:12.407283+0000","last_active":"2026-03-10T08:36:21.399152+0000","last_peered":"2026-03-10T08:36:21.399152+0000","last_clean":"2026-03-10T08:36:21.399152+0000","last_became_active":"2026-03-10T08:36:12.401220+0000","last_became_peered":"2026-03-10T08:36:12.401220+00
00","last_unstale":"2026-03-10T08:36:21.399152+0000","last_undegraded":"2026-03-10T08:36:21.399152+0000","last_fullsized":"2026-03-10T08:36:21.399152+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:05:13.110333+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5
.13","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886367+0000","last_change":"2026-03-10T08:36:16.420867+0000","last_active":"2026-03-10T08:36:21.886367+0000","last_peered":"2026-03-10T08:36:21.886367+0000","last_clean":"2026-03-10T08:36:21.886367+0000","last_became_active":"2026-03-10T08:36:16.420767+0000","last_became_peered":"2026-03-10T08:36:16.420767+0000","last_unstale":"2026-03-10T08:36:21.886367+0000","last_undegraded":"2026-03-10T08:36:21.886367+0000","last_fullsized":"2026-03-10T08:36:21.886367+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:39:56.108735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.879888+0000","last_change":"2026-03-10T08:36:18.420018+0000","last_active":"2026-03-10T08:36:21.879888+0000","last_peered":"2026-03-10T08:36:21.879888+0000","last_clean":"2026-03-10T08:36:21.879888+0000","last_became_active":"2026-03-10T08:36:18.419939+0000","last_became_peered":"2026-03-10T08:36:18.419939+0000","last_unstale":"2026-03-10T08:36:21.879888+0000","last_undegraded":"2026-03-10T08:36:21.879888+0000","last_fullsized":"2026-03-10T08:36:21.879888+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:37:12.387445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.898890+0000","last_change":"2026-03-10T08:36:14.474655+0000","last_active":"2026-03-10T08:36:21.898890+0000","last_peered":"2026-03-10T08:36:21.898890+0000","last_clean":"2026-03-10T08:36:21.898890+0000","last_became_active":"2026-03-10T08:36:14.474518+0000","last_became_peered":"2026-03-10T08:36:14.474518+0000","l
ast_unstale":"2026-03-10T08:36:21.898890+0000","last_undegraded":"2026-03-10T08:36:21.898890+0000","last_fullsized":"2026-03-10T08:36:21.898890+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:48:07.890078+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":
"3.16","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393856+0000","last_change":"2026-03-10T08:36:12.397950+0000","last_active":"2026-03-10T08:36:21.393856+0000","last_peered":"2026-03-10T08:36:21.393856+0000","last_clean":"2026-03-10T08:36:21.393856+0000","last_became_active":"2026-03-10T08:36:12.397528+0000","last_became_peered":"2026-03-10T08:36:12.397528+0000","last_unstale":"2026-03-10T08:36:21.393856+0000","last_undegraded":"2026-03-10T08:36:21.393856+0000","last_fullsized":"2026-03-10T08:36:21.393856+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:49:35.407002+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402867+0000","last_change":"2026-03-10T08:36:16.432068+0000","last_active":"2026-03-10T08:36:21.402867+0000","last_peered":"2026-03-10T08:36:21.402867+0000","last_clean":"2026-03-10T08:36:21.402867+0000","last_became_active":"2026-03-10T08:36:16.431964+0000","last_became_peered":"2026-03-10T08:36:16.431964+0000","last_unstale":"2026-03-10T08:36:21.402867+0000","last_undegraded":"2026-03-10T08:36:21.402867+0000","last_fullsized":"2026-03-10T08:36:21.402867+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387
844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:07:17.951369+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886875+0000","last_change":"2026-03-10T08:36:18.426029+0000","last_active":"2026-03-10T08:36:21.886875+0000","last_peered":"2026-03-10T08:36:21.886875+0000","last_clean":"2026-03-10T08:36:21.886875+0000","last_became_active":"2026-03-10T08:36:18.425926+0000","last_became_peered":"2026-03-10T08:36:18.425926+0000","las
t_unstale":"2026-03-10T08:36:21.886875+0000","last_undegraded":"2026-03-10T08:36:21.886875+0000","last_fullsized":"2026-03-10T08:36:21.886875+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:53:20.882581+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","ve
rsion":"54'4","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886280+0000","last_change":"2026-03-10T08:36:14.413500+0000","last_active":"2026-03-10T08:36:21.886280+0000","last_peered":"2026-03-10T08:36:21.886280+0000","last_clean":"2026-03-10T08:36:21.886280+0000","last_became_active":"2026-03-10T08:36:14.413351+0000","last_became_peered":"2026-03-10T08:36:14.413351+0000","last_unstale":"2026-03-10T08:36:21.886280+0000","last_undegraded":"2026-03-10T08:36:21.886280+0000","last_fullsized":"2026-03-10T08:36:21.886280+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:18:12.192762+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880155+0000","last_change":"2026-03-10T08:36:12.410694+0000","last_active":"2026-03-10T08:36:21.880155+0000","last_peered":"2026-03-10T08:36:21.880155+0000","last_clean":"2026-03-10T08:36:21.880155+0000","last_became_active":"2026-03-10T08:36:12.404724+0000","last_became_peered":"2026-03-10T08:36:12.404724+0000","last_unstale":"2026-03-10T08:36:21.880155+0000","last_undegraded":"2026-03-10T08:36:21.880155+0000","last_fullsized":"2026-03-10T08:36:21.880155+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374
234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:29:34.907116+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452987+0000","last_change":"2026-03-10T08:36:16.433406+0000","last_active":"2026-03-10T08:36:21.452987+0000","last_peered":"2026-03-10T08:36:21.452987+0000","last_clean":"2026-03-10T08:36:21.452987+0000","last_became_active":"2026-03-10T08:36:16.433216+0000","last_became_peered":"2026-03-10T08:36:16.433216+0000","las
t_unstale":"2026-03-10T08:36:21.452987+0000","last_undegraded":"2026-03-10T08:36:21.452987+0000","last_fullsized":"2026-03-10T08:36:21.452987+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:05:50.448617+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","ve
rsion":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.398984+0000","last_change":"2026-03-10T08:36:18.439895+0000","last_active":"2026-03-10T08:36:21.398984+0000","last_peered":"2026-03-10T08:36:21.398984+0000","last_clean":"2026-03-10T08:36:21.398984+0000","last_became_active":"2026-03-10T08:36:18.439622+0000","last_became_peered":"2026-03-10T08:36:18.439622+0000","last_unstale":"2026-03-10T08:36:21.398984+0000","last_undegraded":"2026-03-10T08:36:21.398984+0000","last_fullsized":"2026-03-10T08:36:21.398984+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:01:40.457227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892248+0000","last_change":"2026-03-10T08:36:18.434413+0000","last_active":"2026-03-10T08:36:21.892248+0000","last_peered":"2026-03-10T08:36:21.892248+0000","last_clean":"2026-03-10T08:36:21.892248+0000","last_became_active":"2026-03-10T08:36:18.434322+0000","last_became_peered":"2026-03-10T08:36:18.434322+0000","last_unstale":"2026-03-10T08:36:21.892248+0000","last_undegraded":"2026-03-10T08:36:21.892248+0000","last_fullsized":"2026-03-10T08:36:21.892248+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:31:50.758084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886792+0000","last_change":"2026-03-10T08:36:12.424692+0000","last_active":"2026-03-10T08:36:21.886792+0000","last_peered":"2026-03-10T08:36:21.886792+0000","last_clean":"2026-03-10T08:36:21.886792+0000","last_became_active":"2026-03-10T08:36:12.424496+0000","last_became_peered":"2026-03-10T08:36:12.424496+0000","las
t_unstale":"2026-03-10T08:36:21.886792+0000","last_undegraded":"2026-03-10T08:36:21.886792+0000","last_fullsized":"2026-03-10T08:36:21.886792+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:01:34.197675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","ve
rsion":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452710+0000","last_change":"2026-03-10T08:36:14.410903+0000","last_active":"2026-03-10T08:36:21.452710+0000","last_peered":"2026-03-10T08:36:21.452710+0000","last_clean":"2026-03-10T08:36:21.452710+0000","last_became_active":"2026-03-10T08:36:14.410711+0000","last_became_peered":"2026-03-10T08:36:14.410711+0000","last_unstale":"2026-03-10T08:36:21.452710+0000","last_undegraded":"2026-03-10T08:36:21.452710+0000","last_fullsized":"2026-03-10T08:36:21.452710+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:29:39.210411+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880583+0000","last_change":"2026-03-10T08:36:16.431631+0000","last_active":"2026-03-10T08:36:21.880583+0000","last_peered":"2026-03-10T08:36:21.880583+0000","last_clean":"2026-03-10T08:36:21.880583+0000","last_became_active":"2026-03-10T08:36:16.431551+0000","last_became_peered":"2026-03-10T08:36:16.431551+0000","last_unstale":"2026-03-10T08:36:21.880583+0000","last_undegraded":"2026-03-10T08:36:21.880583+0000","last_fullsized":"2026-03-10T08:36:21.880583+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:42:09.312170+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_l
egacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":6,"num_read_kb":1,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_err
ors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2314240,"data_stored":2296400,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":43,"seq":184683593732,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27836,"kb_used_data":1000,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939588,"statfs":{"total":21470642176,"available":21442138112,"internally_reserved":0,"allocated":1024000,"data_stored":672045,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":38,"seq":163208757255,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27820,"kb_used_data":980,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939604,"statfs":{"total":21470642176,"available":21442154496,"internally_reserved":0,"allocated":1003520,"data_stored":670960,"data_compressed":0,"data_compressed_allocated":0
,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":33,"seq":141733920777,"num_pgs":33,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27380,"kb_used_data":540,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940044,"statfs":{"total":21470642176,"available":21442605056,"internally_reserved":0,"allocated":552960,"data_stored":212488,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":4,"up_from":28,"seq":120259084299,"num_pgs":51,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27380,"kb_used_data":548,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940044,"statfs":{"total":21470642176,"available":21442605056,"internally_reserved":0,"allocated":561152,"data_stored":207128,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":7,"apply_latency_ms":7,"commit_latency_ns":7000000,"apply_latency_ns":7000000},"alerts":[]},{"osd":3,"up_from":23,"seq":98784247821,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27396,"kb_used_data":5
60,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940028,"statfs":{"total":21470642176,"available":21442588672,"internally_reserved":0,"allocated":573440,"data_stored":207427,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":9,"apply_latency_ms":9,"commit_latency_ns":9000000,"apply_latency_ns":9000000},"alerts":[]},{"osd":2,"up_from":16,"seq":68719476752,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27364,"kb_used_data":528,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940060,"statfs":{"total":21470642176,"available":21442621440,"internally_reserved":0,"allocated":540672,"data_stored":212264,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607570,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27436,"kb_used_data":600,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939988,"statfs":{"total":21470642176,"available":21442547712,"internally_reserved":0,"allocated":614400,"data_stored":214894,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":8,"apply_latency_ms":8,"com
mit_latency_ns":8000000,"apply_latency_ns":8000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738388,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27844,"kb_used_data":1008,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939580,"statfs":{"total":21470642176,"available":21442129920,"internally_reserved":0,"allocated":1032192,"data_stored":671767,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"in
ternally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"av
ailable":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1177,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"os
d":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":
0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd"
:3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T08:36:25.207 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph pg dump --format=json 2026-03-10T08:36:25.431 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:25.461 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:25 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2531564872' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T08:36:25.461 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:25 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3906788971' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T08:36:25.461 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:25 vm03 ceph-mon[57160]: from='client.14682 v1:192.168.123.103:0/1032725500' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:36:25.461 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:25 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2531564872' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T08:36:25.461 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:25 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3906788971' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T08:36:25.461 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:25 vm03 ceph-mon[50703]: from='client.14682 v1:192.168.123.103:0/1032725500' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:36:25.675 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:25.679 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-10T08:36:25.704 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:25 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2531564872' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T08:36:25.704 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:25 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3906788971' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T08:36:25.704 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:25 vm06 ceph-mon[54477]: from='client.14682 v1:192.168.123.103:0/1032725500' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:36:25.741 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":112,"stamp":"2026-03-10T08:36:24.266906+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":774,"num_read_kb":517,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":375,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220456,"kb_used_data":5764,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":16751
8936,"statfs":{"total":171765137408,"available":171539390464,"internally_reserved":0,"allocated":5902336,"data_stored":3068973,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4325,"num_objects":186,"num_object_clones":0,"num_object_copies":558,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":186,"num_whiteouts":0,"num_read":704,"num_read_kb":460,"num_write":421,"num_write_kb":35,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"6.001260"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880307+0000","last_change":"2026-03-10T08:36:12.411018+0000","last_active":"2026
-03-10T08:36:21.880307+0000","last_peered":"2026-03-10T08:36:21.880307+0000","last_clean":"2026-03-10T08:36:21.880307+0000","last_became_active":"2026-03-10T08:36:12.404523+0000","last_became_peered":"2026-03-10T08:36:12.404523+0000","last_unstale":"2026-03-10T08:36:21.880307+0000","last_undegraded":"2026-03-10T08:36:21.880307+0000","last_fullsized":"2026-03-10T08:36:21.880307+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:22:43.385552+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406454+0000","last_change":"2026-03-10T08:36:14.410669+0000","last_active":"2026-03-10T08:36:21.406454+0000","last_peered":"2026-03-10T08:36:21.406454+0000","last_clean":"2026-03-10T08:36:21.406454+0000","last_became_active":"2026-03-10T08:36:14.410443+0000","last_became_peered":"2026-03-10T08:36:14.410443+0000","last_unstale":"2026-03-10T08:36:21.406454+0000","last_undegraded":"2026-03-10T08:36:21.406454+0000","last_fullsized":"2026-03-10T08:36:21.406454+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:32:55.455883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.893165+0000","last_change":"2026-03-10T08:36:16.424737+0000","last_active":"2026-03-10T08:36:21.893165+0000","last_peered":"2026-03-10T08:36:21.893165+0000","last_clean":"2026-03-10T08:36:21.893165+0000","last_became_active":"2026-03-10T08:36:16.424616+0000","last_became_peered":"2026-03-10T08:36:16.424616+000
0","last_unstale":"2026-03-10T08:36:21.893165+0000","last_undegraded":"2026-03-10T08:36:21.893165+0000","last_fullsized":"2026-03-10T08:36:21.893165+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:58:45.690547+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.
1a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406428+0000","last_change":"2026-03-10T08:36:18.445517+0000","last_active":"2026-03-10T08:36:21.406428+0000","last_peered":"2026-03-10T08:36:21.406428+0000","last_clean":"2026-03-10T08:36:21.406428+0000","last_became_active":"2026-03-10T08:36:18.445429+0000","last_became_peered":"2026-03-10T08:36:18.445429+0000","last_unstale":"2026-03-10T08:36:21.406428+0000","last_undegraded":"2026-03-10T08:36:21.406428+0000","last_fullsized":"2026-03-10T08:36:21.406428+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:41:48.603648+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886485+0000","last_change":"2026-03-10T08:36:18.452759+0000","last_active":"2026-03-10T08:36:21.886485+0000","last_peered":"2026-03-10T08:36:21.886485+0000","last_clean":"2026-03-10T08:36:21.886485+0000","last_became_active":"2026-03-10T08:36:18.452416+0000","last_became_peered":"2026-03-10T08:36:18.452416+0000","last_unstale":"2026-03-10T08:36:21.886485+0000","last_undegraded":"2026-03-10T08:36:21.886485+0000","last_fullsized":"2026-03-10T08:36:21.886485+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:42:23.883890+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886510+0000","last_change":"2026-03-10T08:36:12.412903+0000","last_active":"2026-03-10T08:36:21.886510+0000","last_peered":"2026-03-10T08:36:21.886510+0000","last_clean":"2026-03-10T08:36:21.886510+0000","last_became_active":"2026-03-10T08:36:12.412759+0000","last_became_peered":"2026-03-10T08:36:12.412759+0000","las
t_unstale":"2026-03-10T08:36:21.886510+0000","last_undegraded":"2026-03-10T08:36:21.886510+0000","last_fullsized":"2026-03-10T08:36:21.886510+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:38:34.431855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","ve
rsion":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886536+0000","last_change":"2026-03-10T08:36:14.407669+0000","last_active":"2026-03-10T08:36:21.886536+0000","last_peered":"2026-03-10T08:36:21.886536+0000","last_clean":"2026-03-10T08:36:21.886536+0000","last_became_active":"2026-03-10T08:36:14.407572+0000","last_became_peered":"2026-03-10T08:36:14.407572+0000","last_unstale":"2026-03-10T08:36:21.886536+0000","last_undegraded":"2026-03-10T08:36:21.886536+0000","last_fullsized":"2026-03-10T08:36:21.886536+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:42:57.447641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404044+0000","last_change":"2026-03-10T08:36:16.435838+0000","last_active":"2026-03-10T08:36:21.404044+0000","last_peered":"2026-03-10T08:36:21.404044+0000","last_clean":"2026-03-10T08:36:21.404044+0000","last_became_active":"2026-03-10T08:36:16.435749+0000","last_became_peered":"2026-03-10T08:36:16.435749+0000","last_unstale":"2026-03-10T08:36:21.404044+0000","last_undegraded":"2026-03-10T08:36:21.404044+0000","last_fullsized":"2026-03-10T08:36:21.404044+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:00:26.892511+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393524+0000","last_change":"2026-03-10T08:36:12.406510+0000","last_active":"2026-03-10T08:36:21.393524+0000","last_peered":"2026-03-10T08:36:21.393524+0000","last_clean":"2026-03-10T08:36:21.393524+0000","last_became_active":"2026-03-10T08:36:12.406315+0000","last_became_peered":"2026-03-10T08:36:12.406315+0000
","last_unstale":"2026-03-10T08:36:21.393524+0000","last_undegraded":"2026-03-10T08:36:21.393524+0000","last_fullsized":"2026-03-10T08:36:21.393524+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:47:24.821385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1
a","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404644+0000","last_change":"2026-03-10T08:36:14.408889+0000","last_active":"2026-03-10T08:36:21.404644+0000","last_peered":"2026-03-10T08:36:21.404644+0000","last_clean":"2026-03-10T08:36:21.404644+0000","last_became_active":"2026-03-10T08:36:14.408814+0000","last_became_peered":"2026-03-10T08:36:14.408814+0000","last_unstale":"2026-03-10T08:36:21.404644+0000","last_undegraded":"2026-03-10T08:36:21.404644+0000","last_fullsized":"2026-03-10T08:36:21.404644+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:14:59.985223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393587+0000","last_change":"2026-03-10T08:36:16.429130+0000","last_active":"2026-03-10T08:36:21.393587+0000","last_peered":"2026-03-10T08:36:21.393587+0000","last_clean":"2026-03-10T08:36:21.393587+0000","last_became_active":"2026-03-10T08:36:16.428583+0000","last_became_peered":"2026-03-10T08:36:16.428583+0000","last_unstale":"2026-03-10T08:36:21.393587+0000","last_undegraded":"2026-03-10T08:36:21.393587+0000","last_fullsized":"2026-03-10T08:36:21.393587+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:38:43.808128+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880353+0000","last_change":"2026-03-10T08:36:18.451823+0000","last_active":"2026-03-10T08:36:21.880353+0000","last_peered":"2026-03-10T08:36:21.880353+0000","last_clean":"2026-03-10T08:36:21.880353+0000","last_became_active":"2026-03-10T08:36:18.451739+0000","last_became_peered":"2026-03-10T08:36:18.451739+0000
","last_unstale":"2026-03-10T08:36:21.880353+0000","last_undegraded":"2026-03-10T08:36:21.880353+0000","last_fullsized":"2026-03-10T08:36:21.880353+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:03:15.740181+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1
c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393782+0000","last_change":"2026-03-10T08:36:12.393117+0000","last_active":"2026-03-10T08:36:21.393782+0000","last_peered":"2026-03-10T08:36:21.393782+0000","last_clean":"2026-03-10T08:36:21.393782+0000","last_became_active":"2026-03-10T08:36:12.392842+0000","last_became_peered":"2026-03-10T08:36:12.392842+0000","last_unstale":"2026-03-10T08:36:21.393782+0000","last_undegraded":"2026-03-10T08:36:21.393782+0000","last_fullsized":"2026-03-10T08:36:21.393782+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:23:14.416398+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"54'5","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404313+0000","last_change":"2026-03-10T08:36:14.413566+0000","last_active":"2026-03-10T08:36:21.404313+0000","last_peered":"2026-03-10T08:36:21.404313+0000","last_clean":"2026-03-10T08:36:21.404313+0000","last_became_active":"2026-03-10T08:36:14.413188+0000","last_became_peered":"2026-03-10T08:36:14.413188+0000","last_unstale":"2026-03-10T08:36:21.404313+0000","last_undegraded":"2026-03-10T08:36:21.404313+0000","last_fullsized":"2026-03-10T08:36:21.404313+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:58:32.802328+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399119+0000","last_change":"2026-03-10T08:36:16.438694+0000","last_active":"2026-03-10T08:36:21.399119+0000","last_peered":"2026-03-10T08:36:21.399119+0000","last_clean":"2026-03-10T08:36:21.399119+0000","last_became_active":"2026-03-10T08:36:16.432482+0000","last_became_peered":"2026-03-10T08:36:16.432482+0000",
"last_unstale":"2026-03-10T08:36:21.399119+0000","last_undegraded":"2026-03-10T08:36:21.399119+0000","last_fullsized":"2026-03-10T08:36:21.399119+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:30:46.058949+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393750+0000","last_change":"2026-03-10T08:36:18.441090+0000","last_active":"2026-03-10T08:36:21.393750+0000","last_peered":"2026-03-10T08:36:21.393750+0000","last_clean":"2026-03-10T08:36:21.393750+0000","last_became_active":"2026-03-10T08:36:18.441016+0000","last_became_peered":"2026-03-10T08:36:18.441016+0000","last_unstale":"2026-03-10T08:36:21.393750+0000","last_undegraded":"2026-03-10T08:36:21.393750+0000","last_fullsized":"2026-03-10T08:36:21.393750+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:35:48.763286+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404949+0000","last_change":"2026-03-10T08:36:18.445826+0000","last_active":"2026-03-10T08:36:21.404949+0000","last_peered":"2026-03-10T08:36:21.404949+0000","last_clean":"2026-03-10T08:36:21.404949+0000","last_became_active":"2026-03-10T08:36:18.445743+0000","last_became_peered":"2026-03-10T08:36:18.445743+0000","last_unstale":"2026-03-10T08:36:21.404949+0000","last_undegraded":"2026-03-10T08:36:21.404949+0000","last_fullsized":"2026-03-10T08:36:21.404949+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:09:49.924793+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880127+0000","last_change":"2026-03-10T08:36:12.410439+0000","last_active":"2026-03-10T08:36:21.880127+0000","last_peered":"2026-03-10T08:36:21.880127+0000","last_clean":"2026-03-10T08:36:21.880127+0000","last_became_active":"2026-03-10T08:36:12.404612+0000","last_became_peered":"2026-03-10T08:36:12.404612+0000","las
t_unstale":"2026-03-10T08:36:21.880127+0000","last_undegraded":"2026-03-10T08:36:21.880127+0000","last_fullsized":"2026-03-10T08:36:21.880127+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:14:38.545641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","ve
rsion":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.883710+0000","last_change":"2026-03-10T08:36:14.410151+0000","last_active":"2026-03-10T08:36:21.883710+0000","last_peered":"2026-03-10T08:36:21.883710+0000","last_clean":"2026-03-10T08:36:21.883710+0000","last_became_active":"2026-03-10T08:36:14.410061+0000","last_became_peered":"2026-03-10T08:36:14.410061+0000","last_unstale":"2026-03-10T08:36:21.883710+0000","last_undegraded":"2026-03-10T08:36:21.883710+0000","last_fullsized":"2026-03-10T08:36:21.883710+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:49:33.285694+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.893214+0000","last_change":"2026-03-10T08:36:16.427541+0000","last_active":"2026-03-10T08:36:21.893214+0000","last_peered":"2026-03-10T08:36:21.893214+0000","last_clean":"2026-03-10T08:36:21.893214+0000","last_became_active":"2026-03-10T08:36:16.427424+0000","last_became_peered":"2026-03-10T08:36:16.427424+0000","last_unstale":"2026-03-10T08:36:21.893214+0000","last_undegraded":"2026-03-10T08:36:21.893214+0000","last_fullsized":"2026-03-10T08:36:21.893214+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:00:59.337209+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899043+0000","last_change":"2026-03-10T08:36:18.428674+0000","last_active":"2026-03-10T08:36:21.899043+0000","last_peered":"2026-03-10T08:36:21.899043+0000","last_clean":"2026-03-10T08:36:21.899043+0000","last_became_active":"2026-03-10T08:36:18.428583+0000","last_became_peered":"2026-03-10T08:36:18.428583+0000
","last_unstale":"2026-03-10T08:36:21.899043+0000","last_undegraded":"2026-03-10T08:36:21.899043+0000","last_fullsized":"2026-03-10T08:36:21.899043+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:08:41.688385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1
a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404378+0000","last_change":"2026-03-10T08:36:12.412700+0000","last_active":"2026-03-10T08:36:21.404378+0000","last_peered":"2026-03-10T08:36:21.404378+0000","last_clean":"2026-03-10T08:36:21.404378+0000","last_became_active":"2026-03-10T08:36:12.412581+0000","last_became_peered":"2026-03-10T08:36:12.412581+0000","last_unstale":"2026-03-10T08:36:21.404378+0000","last_undegraded":"2026-03-10T08:36:21.404378+0000","last_fullsized":"2026-03-10T08:36:21.404378+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:42:24.337857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899084+0000","last_change":"2026-03-10T08:36:14.414474+0000","last_active":"2026-03-10T08:36:21.899084+0000","last_peered":"2026-03-10T08:36:21.899084+0000","last_clean":"2026-03-10T08:36:21.899084+0000","last_became_active":"2026-03-10T08:36:14.414334+0000","last_became_peered":"2026-03-10T08:36:14.414334+0000","last_unstale":"2026-03-10T08:36:21.899084+0000","last_undegraded":"2026-03-10T08:36:21.899084+0000","last_fullsized":"2026-03-10T08:36:21.899084+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.3
82053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:59:09.761735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404330+0000","last_change":"2026-03-10T08:36:16.431922+0000","last_active":"2026-03-10T08:36:21.404330+0000","last_peered":"2026-03-10T08:36:21.404330+0000","last_clean":"2026-03-10T08:36:21.404330+0000","last_became_active":"2026-03-10T08:36:16.431650+0000","last_became_peered":"2026-03-10T08:36:16.431650+
0000","last_unstale":"2026-03-10T08:36:21.404330+0000","last_undegraded":"2026-03-10T08:36:21.404330+0000","last_fullsized":"2026-03-10T08:36:21.404330+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:26:25.723031+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"6.1c","version":"54'1","reported_seq":16,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.403285+0000","last_change":"2026-03-10T08:36:18.427858+0000","last_active":"2026-03-10T08:36:21.403285+0000","last_peered":"2026-03-10T08:36:21.403285+0000","last_clean":"2026-03-10T08:36:21.403285+0000","last_became_active":"2026-03-10T08:36:18.418006+0000","last_became_peered":"2026-03-10T08:36:18.418006+0000","last_unstale":"2026-03-10T08:36:21.403285+0000","last_undegraded":"2026-03-10T08:36:21.403285+0000","last_fullsized":"2026-03-10T08:36:21.403285+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:05:56.566393+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"47'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892700+0000","last_change":"2026-03-10T08:36:12.405689+0000","last_active":"2026-03-10T08:36:21.892700+0000","last_peered":"2026-03-10T08:36:21.892700+0000","last_clean":"2026-03-10T08:36:21.892700+0000","last_became_active":"2026-03-10T08:36:12.398785+0000","last_became_peered":"2026-03-10T08:36:12.398785+0000","last_unstale":"2026-03-10T08:36:21.892700+0000","last_undegraded":"2026-03-10T08:36:21.892700+0000","last_fullsized":"2026-03-10T08:36:21.892700+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.
374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:55:35.563142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":1039,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880855+0000","last_change":"2026-03-10T08:36:14.478003+0000","last_active":"2026-03-10T08:36:21.880855+0000","last_peered":"2026-03-10T08:36:21.880855+0000","last_clean":"2026-03-10T08:36:21.880855+0000","last_became_active":"2026-03-10T08:36:14.477861+0000","last_became_peered":"2026-03-10T08:36:14.477861+00
00","last_unstale":"2026-03-10T08:36:21.880855+0000","last_undegraded":"2026-03-10T08:36:21.880855+0000","last_fullsized":"2026-03-10T08:36:21.880855+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:54:25.539408+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pg
id":"5.1f","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.453178+0000","last_change":"2026-03-10T08:36:16.433348+0000","last_active":"2026-03-10T08:36:21.453178+0000","last_peered":"2026-03-10T08:36:21.453178+0000","last_clean":"2026-03-10T08:36:21.453178+0000","last_became_active":"2026-03-10T08:36:16.433094+0000","last_became_peered":"2026-03-10T08:36:16.433094+0000","last_unstale":"2026-03-10T08:36:21.453178+0000","last_undegraded":"2026-03-10T08:36:21.453178+0000","last_fullsized":"2026-03-10T08:36:21.453178+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:51:12.897388+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892324+0000","last_change":"2026-03-10T08:36:14.415127+0000","last_active":"2026-03-10T08:36:21.892324+0000","last_peered":"2026-03-10T08:36:21.892324+0000","last_clean":"2026-03-10T08:36:21.892324+0000","last_became_active":"2026-03-10T08:36:14.415016+0000","last_became_peered":"2026-03-10T08:36:14.415016+0000","last_unstale":"2026-03-10T08:36:21.892324+0000","last_undegraded":"2026-03-10T08:36:21.892324+0000","last_fullsized":"2026-03-10T08:36:21.892324+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:37:00.346418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886622+0000","last_change":"2026-03-10T08:36:12.423503+0000","last_active":"2026-03-10T08:36:21.886622+0000","last_peered":"2026-03-10T08:36:21.886622+0000","last_clean":"2026-03-10T08:36:21.886622+0000","last_became_active":"2026-03-10T08:36:12.421929+0000","last_became_peered":"2026-03-10T08:36:12.421929+00
00","last_unstale":"2026-03-10T08:36:21.886622+0000","last_undegraded":"2026-03-10T08:36:21.886622+0000","last_fullsized":"2026-03-10T08:36:21.886622+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:15:35.769167+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5
.e","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404156+0000","last_change":"2026-03-10T08:36:16.440418+0000","last_active":"2026-03-10T08:36:21.404156+0000","last_peered":"2026-03-10T08:36:21.404156+0000","last_clean":"2026-03-10T08:36:21.404156+0000","last_became_active":"2026-03-10T08:36:16.440320+0000","last_became_peered":"2026-03-10T08:36:16.440320+0000","last_unstale":"2026-03-10T08:36:21.404156+0000","last_undegraded":"2026-03-10T08:36:21.404156+0000","last_fullsized":"2026-03-10T08:36:21.404156+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:35:15.855610+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393359+0000","last_change":"2026-03-10T08:36:18.440872+0000","last_active":"2026-03-10T08:36:21.393359+0000","last_peered":"2026-03-10T08:36:21.393359+0000","last_clean":"2026-03-10T08:36:21.393359+0000","last_became_active":"2026-03-10T08:36:18.440734+0000","last_became_peered":"2026-03-10T08:36:18.440734+0000","last_unstale":"2026-03-10T08:36:21.393359+0000","last_undegraded":"2026-03-10T08:36:21.393359+0000","last_fullsized":"2026-03-10T08:36:21.393359+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:07:57.062169+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"54'18","reported_seq":55,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886766+0000","last_change":"2026-03-10T08:36:14.475055+0000","last_active":"2026-03-10T08:36:21.886766+0000","last_peered":"2026-03-10T08:36:21.886766+0000","last_clean":"2026-03-10T08:36:21.886766+0000","last_became_active":"2026-03-10T08:36:14.474922+0000","last_became_peered":"2026-03-10T08:36:14.474922+0000","las
t_unstale":"2026-03-10T08:36:21.886766+0000","last_undegraded":"2026-03-10T08:36:21.886766+0000","last_fullsized":"2026-03-10T08:36:21.886766+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:56:13.952057+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3
.7","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886741+0000","last_change":"2026-03-10T08:36:12.424745+0000","last_active":"2026-03-10T08:36:21.886741+0000","last_peered":"2026-03-10T08:36:21.886741+0000","last_clean":"2026-03-10T08:36:21.886741+0000","last_became_active":"2026-03-10T08:36:12.424617+0000","last_became_peered":"2026-03-10T08:36:12.424617+0000","last_unstale":"2026-03-10T08:36:21.886741+0000","last_undegraded":"2026-03-10T08:36:21.886741+0000","last_fullsized":"2026-03-10T08:36:21.886741+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:27:57.626635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404156+0000","last_change":"2026-03-10T08:36:16.431977+0000","last_active":"2026-03-10T08:36:21.404156+0000","last_peered":"2026-03-10T08:36:21.404156+0000","last_clean":"2026-03-10T08:36:21.404156+0000","last_became_active":"2026-03-10T08:36:16.431764+0000","last_became_peered":"2026-03-10T08:36:16.431764+0000","last_unstale":"2026-03-10T08:36:21.404156+0000","last_undegraded":"2026-03-10T08:36:21.404156+0000","last_fullsized":"2026-03-10T08:36:21.404156+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.3878
44+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:23:10.636521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404480+0000","last_change":"2026-03-10T08:36:18.433176+0000","last_active":"2026-03-10T08:36:21.404480+0000","last_peered":"2026-03-10T08:36:21.404480+0000","last_clean":"2026-03-10T08:36:21.404480+0000","last_became_active":"2026-03-10T08:36:18.433068+0000","last_became_peered":"2026-03-10T08:36:18.433068+0000","last_
unstale":"2026-03-10T08:36:21.404480+0000","last_undegraded":"2026-03-10T08:36:21.404480+0000","last_fullsized":"2026-03-10T08:36:21.404480+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:00:55.564867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","versi
on":"54'14","reported_seq":44,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404922+0000","last_change":"2026-03-10T08:36:14.404265+0000","last_active":"2026-03-10T08:36:21.404922+0000","last_peered":"2026-03-10T08:36:21.404922+0000","last_clean":"2026-03-10T08:36:21.404922+0000","last_became_active":"2026-03-10T08:36:14.403568+0000","last_became_peered":"2026-03-10T08:36:14.403568+0000","last_unstale":"2026-03-10T08:36:21.404922+0000","last_undegraded":"2026-03-10T08:36:21.404922+0000","last_fullsized":"2026-03-10T08:36:21.404922+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:12:17.520071+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"47'1","reported_seq":28,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880206+0000","last_change":"2026-03-10T08:36:12.416793+0000","last_active":"2026-03-10T08:36:21.880206+0000","last_peered":"2026-03-10T08:36:21.880206+0000","last_clean":"2026-03-10T08:36:21.880206+0000","last_became_active":"2026-03-10T08:36:12.416718+0000","last_became_peered":"2026-03-10T08:36:12.416718+0000","last_unstale":"2026-03-10T08:36:21.880206+0000","last_undegraded":"2026-03-10T08:36:21.880206+0000","last_fullsized":"2026-03-10T08:36:21.880206+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11
.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:32:22.954897+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886061+0000","last_change":"2026-03-10T08:36:16.426703+0000","last_active":"2026-03-10T08:36:21.886061+0000","last_peered":"2026-03-10T08:36:21.886061+0000","last_clean":"2026-03-10T08:36:21.886061+0000","last_became_active":"2026-03-10T08:36:16.426604+0000","last_became_peered":"2026-03-10T08:36:16.426604+0000"
,"last_unstale":"2026-03-10T08:36:21.886061+0000","last_undegraded":"2026-03-10T08:36:21.886061+0000","last_fullsized":"2026-03-10T08:36:21.886061+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:49:47.407593+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.398896+0000","last_change":"2026-03-10T08:36:18.449523+0000","last_active":"2026-03-10T08:36:21.398896+0000","last_peered":"2026-03-10T08:36:21.398896+0000","last_clean":"2026-03-10T08:36:21.398896+0000","last_became_active":"2026-03-10T08:36:18.437774+0000","last_became_peered":"2026-03-10T08:36:18.437774+0000","last_unstale":"2026-03-10T08:36:21.398896+0000","last_undegraded":"2026-03-10T08:36:21.398896+0000","last_fullsized":"2026-03-10T08:36:21.398896+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:53:29.874022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.893369+0000","last_change":"2026-03-10T08:36:14.409167+0000","last_active":"2026-03-10T08:36:21.893369+0000","last_peered":"2026-03-10T08:36:21.893369+0000","last_clean":"2026-03-10T08:36:21.893369+0000","last_became_active":"2026-03-10T08:36:14.409028+0000","last_became_peered":"2026-03-10T08:36:14.409028+0000","last_unstale":"2026-03-10T08:36:21.893369+0000","last_undegraded":"2026-03-10T08:36:21.893369+0000","last_fullsized":"2026-03-10T08:36:21.893369+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:27:04.902781+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394058+0000","last_change":"2026-03-10T08:36:12.413156+0000","last_active":"2026-03-10T08:36:21.394058+0000","last_peered":"2026-03-10T08:36:21.394058+0000","last_clean":"2026-03-10T08:36:21.394058+0000","last_became_active":"2026-03-10T08:36:12.412898+0000","last_became_peered":"2026-03-10T08:36:12.412898+0000
","last_unstale":"2026-03-10T08:36:21.394058+0000","last_undegraded":"2026-03-10T08:36:21.394058+0000","last_fullsized":"2026-03-10T08:36:21.394058+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:54:54.660675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3
","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880907+0000","last_change":"2026-03-10T08:36:16.429943+0000","last_active":"2026-03-10T08:36:21.880907+0000","last_peered":"2026-03-10T08:36:21.880907+0000","last_clean":"2026-03-10T08:36:21.880907+0000","last_became_active":"2026-03-10T08:36:16.429841+0000","last_became_peered":"2026-03-10T08:36:16.429841+0000","last_unstale":"2026-03-10T08:36:21.880907+0000","last_undegraded":"2026-03-10T08:36:21.880907+0000","last_fullsized":"2026-03-10T08:36:21.880907+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:07:24.520669+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880883+0000","last_change":"2026-03-10T08:36:18.421985+0000","last_active":"2026-03-10T08:36:21.880883+0000","last_peered":"2026-03-10T08:36:21.880883+0000","last_clean":"2026-03-10T08:36:21.880883+0000","last_became_active":"2026-03-10T08:36:18.421916+0000","last_became_peered":"2026-03-10T08:36:18.421916+0000","last_unstale":"2026-03-10T08:36:21.880883+0000","last_undegraded":"2026-03-10T08:36:21.880883+0000","last_fullsized":"2026-03-10T08:36:21.880883+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:11:14.770808+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"54'19","reported_seq":59,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880626+0000","last_change":"2026-03-10T08:36:14.477300+0000","last_active":"2026-03-10T08:36:21.880626+0000","last_peered":"2026-03-10T08:36:21.880626+0000","last_clean":"2026-03-10T08:36:21.880626+0000","last_became_active":"2026-03-10T08:36:14.477175+0000","last_became_peered":"2026-03-10T08:36:14.477175+0000","las
t_unstale":"2026-03-10T08:36:21.880626+0000","last_undegraded":"2026-03-10T08:36:21.880626+0000","last_fullsized":"2026-03-10T08:36:21.880626+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:30:17.354498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3
.4","version":"47'1","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892753+0000","last_change":"2026-03-10T08:36:12.406429+0000","last_active":"2026-03-10T08:36:21.892753+0000","last_peered":"2026-03-10T08:36:21.892753+0000","last_clean":"2026-03-10T08:36:21.892753+0000","last_became_active":"2026-03-10T08:36:12.406085+0000","last_became_peered":"2026-03-10T08:36:12.406085+0000","last_unstale":"2026-03-10T08:36:21.892753+0000","last_undegraded":"2026-03-10T08:36:21.892753+0000","last_fullsized":"2026-03-10T08:36:21.892753+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:29:15.068080+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.453127+0000","last_change":"2026-03-10T08:36:16.426755+0000","last_active":"2026-03-10T08:36:21.453127+0000","last_peered":"2026-03-10T08:36:21.453127+0000","last_clean":"2026-03-10T08:36:21.453127+0000","last_became_active":"2026-03-10T08:36:16.422838+0000","last_became_peered":"2026-03-10T08:36:16.422838+0000","last_unstale":"2026-03-10T08:36:21.453127+0000","last_undegraded":"2026-03-10T08:36:21.453127+0000","last_fullsized":"2026-03-10T08:36:21.453127+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.38
7844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:43:39.844050+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892728+0000","last_change":"2026-03-10T08:36:18.421116+0000","last_active":"2026-03-10T08:36:21.892728+0000","last_peered":"2026-03-10T08:36:21.892728+0000","last_clean":"2026-03-10T08:36:21.892728+0000","last_became_active":"2026-03-10T08:36:18.420688+0000","last_became_peered":"2026-03-10T08:36:18.420688+0000","las
t_unstale":"2026-03-10T08:36:21.892728+0000","last_undegraded":"2026-03-10T08:36:21.892728+0000","last_fullsized":"2026-03-10T08:36:21.892728+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:14:05.072286+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","ver
sion":"54'28","reported_seq":74,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892681+0000","last_change":"2026-03-10T08:36:14.415292+0000","last_active":"2026-03-10T08:36:21.892681+0000","last_peered":"2026-03-10T08:36:21.892681+0000","last_clean":"2026-03-10T08:36:21.892681+0000","last_became_active":"2026-03-10T08:36:14.415201+0000","last_became_peered":"2026-03-10T08:36:14.415201+0000","last_unstale":"2026-03-10T08:36:21.892681+0000","last_undegraded":"2026-03-10T08:36:21.892681+0000","last_fullsized":"2026-03-10T08:36:21.892681+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:33:51.775265+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404597+0000","last_change":"2026-03-10T08:36:12.410926+0000","last_active":"2026-03-10T08:36:21.404597+0000","last_peered":"2026-03-10T08:36:21.404597+0000","last_clean":"2026-03-10T08:36:21.404597+0000","last_became_active":"2026-03-10T08:36:12.410423+0000","last_became_peered":"2026-03-10T08:36:12.410423+0000","last_unstale":"2026-03-10T08:36:21.404597+0000","last_undegraded":"2026-03-10T08:36:21.404597+0000","last_fullsized":"2026-03-10T08:36:21.404597+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36
:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:02:10.204754+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"49'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394147+0000","last_change":"2026-03-10T08:36:14.393754+0000","last_active":"2026-03-10T08:36:21.394147+0000","last_peered":"2026-03-10T08:36:21.394147+0000","last_clean":"2026-03-10T08:36:21.394147+0000","last_became_active":"2026-03-10T08:36:12.406162+0000","last_became_peered":"2026-03-10T08:36:12.406162+000
0","last_unstale":"2026-03-10T08:36:21.394147+0000","last_undegraded":"2026-03-10T08:36:21.394147+0000","last_fullsized":"2026-03-10T08:36:21.394147+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:04:51.974136+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00041008700000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_
snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880458+0000","last_change":"2026-03-10T08:36:16.425935+0000","last_active":"2026-03-10T08:36:21.880458+0000","last_peered":"2026-03-10T08:36:21.880458+0000","last_clean":"2026-03-10T08:36:21.880458+0000","last_became_active":"2026-03-10T08:36:16.425847+0000","last_became_peered":"2026-03-10T08:36:16.425847+0000","last_unstale":"2026-03-10T08:36:21.880458+0000","last_undegraded":"2026-03-10T08:36:21.880458+0000","last_fullsized":"2026-03-10T08:36:21.880458+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:27:44.252713+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886089+0000","last_change":"2026-03-10T08:36:18.453202+0000","last_active":"2026-03-10T08:36:21.886089+0000","last_peered":"2026-03-10T08:36:21.886089+0000","last_clean":"2026-03-10T08:36:21.886089+0000","last_became_active":"2026-03-10T08:36:18.453111+0000","last_became_peered":"2026-03-10T08:36:18.453111+0000","last_unstale":"2026-03-10T08:36:21.886089+0000","last_undegraded":"2026-03-10T08:36:21.886089+0000","last_fullsized":"2026-03-10T08:36:21.886089+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:53:48.051855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"54'13","reported_seq":50,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892780+0000","last_change":"2026-03-10T08:36:14.423735+0000","last_active":"2026-03-10T08:36:21.892780+0000","last_peered":"2026-03-10T08:36:21.892780+0000","last_clean":"2026-03-10T08:36:21.892780+0000","last_became_active":"2026-03-10T08:36:14.423167+0000","last_became_peered":"2026-03-10T08:36:14.423167+0000","las
t_unstale":"2026-03-10T08:36:21.892780+0000","last_undegraded":"2026-03-10T08:36:21.892780+0000","last_fullsized":"2026-03-10T08:36:21.892780+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:42:20.318241+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3
.0","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892807+0000","last_change":"2026-03-10T08:36:12.408080+0000","last_active":"2026-03-10T08:36:21.892807+0000","last_peered":"2026-03-10T08:36:21.892807+0000","last_clean":"2026-03-10T08:36:21.892807+0000","last_became_active":"2026-03-10T08:36:12.407879+0000","last_became_peered":"2026-03-10T08:36:12.407879+0000","last_unstale":"2026-03-10T08:36:21.892807+0000","last_undegraded":"2026-03-10T08:36:21.892807+0000","last_fullsized":"2026-03-10T08:36:21.892807+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:02:43.812546+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"47'1","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884008+0000","last_change":"2026-03-10T08:36:14.392483+0000","last_active":"2026-03-10T08:36:21.884008+0000","last_peered":"2026-03-10T08:36:21.884008+0000","last_clean":"2026-03-10T08:36:21.884008+0000","last_became_active":"2026-03-10T08:36:12.412540+0000","last_became_peered":"2026-03-10T08:36:12.412540+0000","last_unstale":"2026-03-10T08:36:21.884008+0000","last_undegraded":"2026-03-10T08:36:21.884008+0000","last_fullsized":"2026-03-10T08:36:21.884008+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374
234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:05:05.905712+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000361837,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884071+0000","last_change":"2026-03-10T08:36:16.421439+0000","last_active":"2026-03-10T08:36:21.884071+0000","last_peered":"2026-03-10T08:36:21.884071+0000","last_clean":"2026-03-10T08:36:21.884071+0000","last_became_active":"2026-03-10T08:36:16.421242+0000","last_became_peered":"2026-03-10T08:36:16.421242+0
000","last_unstale":"2026-03-10T08:36:21.884071+0000","last_undegraded":"2026-03-10T08:36:21.884071+0000","last_fullsized":"2026-03-10T08:36:21.884071+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:00:23.387729+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"
6.5","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399063+0000","last_change":"2026-03-10T08:36:18.437540+0000","last_active":"2026-03-10T08:36:21.399063+0000","last_peered":"2026-03-10T08:36:21.399063+0000","last_clean":"2026-03-10T08:36:21.399063+0000","last_became_active":"2026-03-10T08:36:18.428400+0000","last_became_peered":"2026-03-10T08:36:18.428400+0000","last_unstale":"2026-03-10T08:36:21.399063+0000","last_undegraded":"2026-03-10T08:36:21.399063+0000","last_fullsized":"2026-03-10T08:36:21.399063+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:11:41.308195+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"54'12","reported_seq":41,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880278+0000","last_change":"2026-03-10T08:36:14.417982+0000","last_active":"2026-03-10T08:36:21.880278+0000","last_peered":"2026-03-10T08:36:21.880278+0000","last_clean":"2026-03-10T08:36:21.880278+0000","last_became_active":"2026-03-10T08:36:14.417677+0000","last_became_peered":"2026-03-10T08:36:14.417677+0000","last_unstale":"2026-03-10T08:36:21.880278+0000","last_undegraded":"2026-03-10T08:36:21.880278+0000","last_fullsized":"2026-03-10T08:36:21.880278+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T13:59:30.405880+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880252+0000","last_change":"2026-03-10T08:36:12.405451+0000","last_active":"2026-03-10T08:36:21.880252+0000","last_peered":"2026-03-10T08:36:21.880252+0000","last_clean":"2026-03-10T08:36:21.880252+0000","last_became_active":"2026-03-10T08:36:12.405128+0000","last_became_peered":"2026-03-10T08:36:12.405128+0000
","last_unstale":"2026-03-10T08:36:21.880252+0000","last_undegraded":"2026-03-10T08:36:21.880252+0000","last_fullsized":"2026-03-10T08:36:21.880252+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:46:16.990290+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0
","version":"54'5","reported_seq":41,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399553+0000","last_change":"2026-03-10T08:36:14.474567+0000","last_active":"2026-03-10T08:36:21.399553+0000","last_peered":"2026-03-10T08:36:21.399553+0000","last_clean":"2026-03-10T08:36:21.399553+0000","last_became_active":"2026-03-10T08:36:12.408655+0000","last_became_peered":"2026-03-10T08:36:12.408655+0000","last_unstale":"2026-03-10T08:36:21.399553+0000","last_undegraded":"2026-03-10T08:36:21.399553+0000","last_fullsized":"2026-03-10T08:36:21.399553+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:54:32.220383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.0039831160000000001,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":6,"num_read_kb":1,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393825+0000","last_change":"2026-03-10T08:36:16.416970+0000","last_active":"2026-03-10T08:36:21.393825+0000","last_peered":"2026-03-10T08:36:21.393825+0000","last_clean":"2026-03-10T08:36:21.393825+0000","last_became_active":"2026-03-10T08:36:16.416699+0000","last_became_peered":"2026-03-10T08:36:16.416699+0000","last_unstale":"2026-03-10T08:36:21.393825+0000","last_undegraded":"2026-03-10T08:36:21.393825+0000","last_fullsized":"2026-03-10T08:36:21.393825+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"20
26-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:12:03.436428+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892476+0000","last_change":"2026-03-10T08:36:18.421216+0000","last_active":"2026-03-10T08:36:21.892476+0000","last_peered":"2026-03-10T08:36:21.892476+0000","last_clean":"2026-03-10T08:36:21.892476+0000","last_became_active":"2026-03-10T08:36:18.421044+0000","last_became_peered":"2026-03-10T08:36:
18.421044+0000","last_unstale":"2026-03-10T08:36:21.892476+0000","last_undegraded":"2026-03-10T08:36:21.892476+0000","last_fullsized":"2026-03-10T08:36:21.892476+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:50:29.875735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]
},{"pgid":"4.5","version":"54'16","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452247+0000","last_change":"2026-03-10T08:36:14.471730+0000","last_active":"2026-03-10T08:36:21.452247+0000","last_peered":"2026-03-10T08:36:21.452247+0000","last_clean":"2026-03-10T08:36:21.452247+0000","last_became_active":"2026-03-10T08:36:14.471564+0000","last_became_peered":"2026-03-10T08:36:14.471564+0000","last_unstale":"2026-03-10T08:36:21.452247+0000","last_undegraded":"2026-03-10T08:36:21.452247+0000","last_fullsized":"2026-03-10T08:36:21.452247+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:04:39.245478+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886831+0000","last_change":"2026-03-10T08:36:12.412829+0000","last_active":"2026-03-10T08:36:21.886831+0000","last_peered":"2026-03-10T08:36:21.886831+0000","last_clean":"2026-03-10T08:36:21.886831+0000","last_became_active":"2026-03-10T08:36:12.412635+0000","last_became_peered":"2026-03-10T08:36:12.412635+0000","last_unstale":"2026-03-10T08:36:21.886831+0000","last_undegraded":"2026-03-10T08:36:21.886831+0000","last_fullsized":"2026-03-10T08:36:21.886831+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:1
1.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:23:44.228885+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"18'32","reported_seq":37,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402810+0000","last_change":"2026-03-10T08:36:10.389035+0000","last_active":"2026-03-10T08:36:21.402810+0000","last_peered":"2026-03-10T08:36:21.402810+0000","last_clean":"2026-03-10T08:36:21.402810+0000","last_became_active":"2026-03-10T08:36:10.382291+0000","last_became_peered":"2026-03-10T08:36:10.382291+0000
","last_unstale":"2026-03-10T08:36:21.402810+0000","last_undegraded":"2026-03-10T08:36:21.402810+0000","last_fullsized":"2026-03-10T08:36:21.402810+0000","mapping_epoch":44,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":45,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:35:15.259757+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:35:15.259757+0000","last_clean_scrub_stamp":"2026-03-10T08:35:15.259757+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:10:56.579708+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"5.4","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402831+0000","last_change":"2026-03-10T08:36:16.438526+0000","last_active":"2026-03-10T08:36:21.402831+0000","last_peered":"2026-03-10T08:36:21.402831+0000","last_clean":"2026-03-10T08:36:21.402831+0000","last_became_active":"2026-03-10T08:36:16.438375+0000","last_became_peered":"2026-03-10T08:36:16.438375+0000","last_unstale":"2026-03-10T08:36:21.402831+0000","last_undegraded":"2026-03-10T08:36:21.402831+0000","last_fullsized":"2026-03-10T08:36:21.402831+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:00:48.852407+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393470+0000","last_change":"2026-03-10T08:36:18.446681+0000","last_active":"2026-03-10T08:36:21.393470+0000","last_peered":"2026-03-10T08:36:21.393470+0000","last_clean":"2026-03-10T08:36:21.393470+0000","last_became_active":"2026-03-10T08:36:18.446574+0000","last_became_peered":"2026-03-10T08:36:18.446574+0000","last_unstale":"2026-03-10T08:36:21.393470+0000","last_undegraded":"2026-03-10T08:36:21.393470+0000","last_fullsized":"2026-03-10T08:36:21.393470+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:39:15.005470+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404523+0000","last_change":"2026-03-10T08:36:14.411679+0000","last_active":"2026-03-10T08:36:21.404523+0000","last_peered":"2026-03-10T08:36:21.404523+0000","last_clean":"2026-03-10T08:36:21.404523+0000","last_became_active":"2026-03-10T08:36:14.411268+0000","last_became_peered":"2026-03-10T08:36:14.411268+0000","las
t_unstale":"2026-03-10T08:36:21.404523+0000","last_undegraded":"2026-03-10T08:36:21.404523+0000","last_fullsized":"2026-03-10T08:36:21.404523+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:34:20.224472+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.9","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404498+0000","last_change":"2026-03-10T08:36:12.404771+0000","last_active":"2026-03-10T08:36:21.404498+0000","last_peered":"2026-03-10T08:36:21.404498+0000","last_clean":"2026-03-10T08:36:21.404498+0000","last_became_active":"2026-03-10T08:36:12.404449+0000","last_became_peered":"2026-03-10T08:36:12.404449+0000","last_unstale":"2026-03-10T08:36:21.404498+0000","last_undegraded":"2026-03-10T08:36:21.404498+0000","last_fullsized":"2026-03-10T08:36:21.404498+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:24:33.726957+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394471+0000","last_change":"2026-03-10T08:36:16.434801+0000","last_active":"2026-03-10T08:36:21.394471+0000","last_peered":"2026-03-10T08:36:21.394471+0000","last_clean":"2026-03-10T08:36:21.394471+0000","last_became_active":"2026-03-10T08:36:16.434725+0000","last_became_peered":"2026-03-10T08:36:16.434725+0000","last_unstale":"2026-03-10T08:36:21.394471+0000","last_undegraded":"2026-03-10T08:36:21.394471+0000","last_fullsized":"2026-03-10T08:36:21.394471+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.3878
44+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:19:51.741035+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.885985+0000","last_change":"2026-03-10T08:36:18.428728+0000","last_active":"2026-03-10T08:36:21.885985+0000","last_peered":"2026-03-10T08:36:21.885985+0000","last_clean":"2026-03-10T08:36:21.885985+0000","last_became_active":"2026-03-10T08:36:18.428599+0000","last_became_peered":"2026-03-10T08:36:18.428599+0000","last_
unstale":"2026-03-10T08:36:21.885985+0000","last_undegraded":"2026-03-10T08:36:21.885985+0000","last_fullsized":"2026-03-10T08:36:21.885985+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:08:50.956076+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","versi
on":"54'17","reported_seq":51,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406399+0000","last_change":"2026-03-10T08:36:14.413530+0000","last_active":"2026-03-10T08:36:21.406399+0000","last_peered":"2026-03-10T08:36:21.406399+0000","last_clean":"2026-03-10T08:36:21.406399+0000","last_became_active":"2026-03-10T08:36:14.413376+0000","last_became_peered":"2026-03-10T08:36:14.413376+0000","last_unstale":"2026-03-10T08:36:21.406399+0000","last_undegraded":"2026-03-10T08:36:21.406399+0000","last_fullsized":"2026-03-10T08:36:21.406399+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:11:24.816751+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452590+0000","last_change":"2026-03-10T08:36:12.416243+0000","last_active":"2026-03-10T08:36:21.452590+0000","last_peered":"2026-03-10T08:36:21.452590+0000","last_clean":"2026-03-10T08:36:21.452590+0000","last_became_active":"2026-03-10T08:36:12.416152+0000","last_became_peered":"2026-03-10T08:36:12.416152+0000","last_unstale":"2026-03-10T08:36:21.452590+0000","last_undegraded":"2026-03-10T08:36:21.452590+0000","last_fullsized":"2026-03-10T08:36:21.452590+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:1
1.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:19:48.989936+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892186+0000","last_change":"2026-03-10T08:36:16.427876+0000","last_active":"2026-03-10T08:36:21.892186+0000","last_peered":"2026-03-10T08:36:21.892186+0000","last_clean":"2026-03-10T08:36:21.892186+0000","last_became_active":"2026-03-10T08:36:16.427772+0000","last_became_peered":"2026-03-10T08:36:16.427772+0000",
"last_unstale":"2026-03-10T08:36:21.892186+0000","last_undegraded":"2026-03-10T08:36:21.892186+0000","last_fullsized":"2026-03-10T08:36:21.892186+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:20:25.396468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884133+0000","last_change":"2026-03-10T08:36:18.433698+0000","last_active":"2026-03-10T08:36:21.884133+0000","last_peered":"2026-03-10T08:36:21.884133+0000","last_clean":"2026-03-10T08:36:21.884133+0000","last_became_active":"2026-03-10T08:36:18.433591+0000","last_became_peered":"2026-03-10T08:36:18.433591+0000","last_unstale":"2026-03-10T08:36:21.884133+0000","last_undegraded":"2026-03-10T08:36:21.884133+0000","last_fullsized":"2026-03-10T08:36:21.884133+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:56:51.011977+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406299+0000","last_change":"2026-03-10T08:36:14.409168+0000","last_active":"2026-03-10T08:36:21.406299+0000","last_peered":"2026-03-10T08:36:21.406299+0000","last_clean":"2026-03-10T08:36:21.406299+0000","last_became_active":"2026-03-10T08:36:14.408855+0000","last_became_peered":"2026-03-10T08:36:14.408855+0000","last_unstale":"2026-03-10T08:36:21.406299+0000","last_undegraded":"2026-03-10T08:36:21.406299+0000","last_fullsized":"2026-03-10T08:36:21.406299+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:04:03.612726+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886581+0000","last_change":"2026-03-10T08:36:12.423383+0000","last_active":"2026-03-10T08:36:21.886581+0000","last_peered":"2026-03-10T08:36:21.886581+0000","last_clean":"2026-03-10T08:36:21.886581+0000","last_became_active":"2026-03-10T08:36:12.420213+0000","last_became_peered":"2026-03-10T08:36:12.420213+0000
","last_unstale":"2026-03-10T08:36:21.886581+0000","last_undegraded":"2026-03-10T08:36:21.886581+0000","last_fullsized":"2026-03-10T08:36:21.886581+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:31:54.743516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d
","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884311+0000","last_change":"2026-03-10T08:36:16.421500+0000","last_active":"2026-03-10T08:36:21.884311+0000","last_peered":"2026-03-10T08:36:21.884311+0000","last_clean":"2026-03-10T08:36:21.884311+0000","last_became_active":"2026-03-10T08:36:16.421361+0000","last_became_peered":"2026-03-10T08:36:16.421361+0000","last_unstale":"2026-03-10T08:36:21.884311+0000","last_undegraded":"2026-03-10T08:36:21.884311+0000","last_fullsized":"2026-03-10T08:36:21.884311+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:36:52.041137+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.406252+0000","last_change":"2026-03-10T08:36:18.434947+0000","last_active":"2026-03-10T08:36:21.406252+0000","last_peered":"2026-03-10T08:36:21.406252+0000","last_clean":"2026-03-10T08:36:21.406252+0000","last_became_active":"2026-03-10T08:36:18.434860+0000","last_became_peered":"2026-03-10T08:36:18.434860+0000","last_unstale":"2026-03-10T08:36:21.406252+0000","last_undegraded":"2026-03-10T08:36:21.406252+0000","last_fullsized":"2026-03-10T08:36:21.406252+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:05:28.158200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880695+0000","last_change":"2026-03-10T08:36:14.417922+0000","last_active":"2026-03-10T08:36:21.880695+0000","last_peered":"2026-03-10T08:36:21.880695+0000","last_clean":"2026-03-10T08:36:21.880695+0000","last_became_active":"2026-03-10T08:36:14.417566+0000","last_became_peered":"2026-03-10T08:36:14.417566+0000","last
_unstale":"2026-03-10T08:36:21.880695+0000","last_undegraded":"2026-03-10T08:36:21.880695+0000","last_fullsized":"2026-03-10T08:36:21.880695+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:05:04.758549+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c"
,"version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393957+0000","last_change":"2026-03-10T08:36:12.413266+0000","last_active":"2026-03-10T08:36:21.393957+0000","last_peered":"2026-03-10T08:36:21.393957+0000","last_clean":"2026-03-10T08:36:21.393957+0000","last_became_active":"2026-03-10T08:36:12.413050+0000","last_became_peered":"2026-03-10T08:36:12.413050+0000","last_unstale":"2026-03-10T08:36:21.393957+0000","last_undegraded":"2026-03-10T08:36:21.393957+0000","last_fullsized":"2026-03-10T08:36:21.393957+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:56:50.949792+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884217+0000","last_change":"2026-03-10T08:36:16.426219+0000","last_active":"2026-03-10T08:36:21.884217+0000","last_peered":"2026-03-10T08:36:21.884217+0000","last_clean":"2026-03-10T08:36:21.884217+0000","last_became_active":"2026-03-10T08:36:16.426149+0000","last_became_peered":"2026-03-10T08:36:16.426149+0000","last_unstale":"2026-03-10T08:36:21.884217+0000","last_undegraded":"2026-03-10T08:36:21.884217+0000","last_fullsized":"2026-03-10T08:36:21.884217+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.3878
44+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:11:29.939825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880669+0000","last_change":"2026-03-10T08:36:18.451537+0000","last_active":"2026-03-10T08:36:21.880669+0000","last_peered":"2026-03-10T08:36:21.880669+0000","last_clean":"2026-03-10T08:36:21.880669+0000","last_became_active":"2026-03-10T08:36:18.451466+0000","last_became_peered":"2026-03-10T08:36:18.451466+0000","last_
unstale":"2026-03-10T08:36:21.880669+0000","last_undegraded":"2026-03-10T08:36:21.880669+0000","last_fullsized":"2026-03-10T08:36:21.880669+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:47:18.003501+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","versi
on":"54'19","reported_seq":54,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452668+0000","last_change":"2026-03-10T08:36:14.471656+0000","last_active":"2026-03-10T08:36:21.452668+0000","last_peered":"2026-03-10T08:36:21.452668+0000","last_clean":"2026-03-10T08:36:21.452668+0000","last_became_active":"2026-03-10T08:36:14.471429+0000","last_became_peered":"2026-03-10T08:36:14.471429+0000","last_unstale":"2026-03-10T08:36:21.452668+0000","last_undegraded":"2026-03-10T08:36:21.452668+0000","last_fullsized":"2026-03-10T08:36:21.452668+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:43:18.792865+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402717+0000","last_change":"2026-03-10T08:36:12.407756+0000","last_active":"2026-03-10T08:36:21.402717+0000","last_peered":"2026-03-10T08:36:21.402717+0000","last_clean":"2026-03-10T08:36:21.402717+0000","last_became_active":"2026-03-10T08:36:12.407524+0000","last_became_peered":"2026-03-10T08:36:12.407524+0000","last_unstale":"2026-03-10T08:36:21.402717+0000","last_undegraded":"2026-03-10T08:36:21.402717+0000","last_fullsized":"2026-03-10T08:36:21.402717+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:1
1.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:39:18.816135+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884243+0000","last_change":"2026-03-10T08:36:16.419552+0000","last_active":"2026-03-10T08:36:21.884243+0000","last_peered":"2026-03-10T08:36:21.884243+0000","last_clean":"2026-03-10T08:36:21.884243+0000","last_became_active":"2026-03-10T08:36:16.419415+0000","last_became_peered":"2026-03-10T08:36:16.419415+0000",
"last_unstale":"2026-03-10T08:36:21.884243+0000","last_undegraded":"2026-03-10T08:36:21.884243+0000","last_fullsized":"2026-03-10T08:36:21.884243+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:20:21.971292+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8",
"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402704+0000","last_change":"2026-03-10T08:36:18.437677+0000","last_active":"2026-03-10T08:36:21.402704+0000","last_peered":"2026-03-10T08:36:21.402704+0000","last_clean":"2026-03-10T08:36:21.402704+0000","last_became_active":"2026-03-10T08:36:18.428520+0000","last_became_peered":"2026-03-10T08:36:18.428520+0000","last_unstale":"2026-03-10T08:36:21.402704+0000","last_undegraded":"2026-03-10T08:36:21.402704+0000","last_fullsized":"2026-03-10T08:36:21.402704+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:04:44.619391+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404808+0000","last_change":"2026-03-10T08:36:14.413066+0000","last_active":"2026-03-10T08:36:21.404808+0000","last_peered":"2026-03-10T08:36:21.404808+0000","last_clean":"2026-03-10T08:36:21.404808+0000","last_became_active":"2026-03-10T08:36:14.412541+0000","last_became_peered":"2026-03-10T08:36:14.412541+0000","last_unstale":"2026-03-10T08:36:21.404808+0000","last_undegraded":"2026-03-10T08:36:21.404808+0000","last_fullsized":"2026-03-10T08:36:21.404808+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:02:39.196023+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399478+0000","last_change":"2026-03-10T08:36:12.408759+0000","last_active":"2026-03-10T08:36:21.399478+0000","last_peered":"2026-03-10T08:36:21.399478+0000","last_clean":"2026-03-10T08:36:21.399478+0000","last_became_active":"2026-03-10T08:36:12.408510+0000","last_became_peered":"2026-03-10T08:36:12.408510+00
00","last_unstale":"2026-03-10T08:36:21.399478+0000","last_undegraded":"2026-03-10T08:36:21.399478+0000","last_fullsized":"2026-03-10T08:36:21.399478+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:41:04.656969+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5
.8","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884176+0000","last_change":"2026-03-10T08:36:16.419489+0000","last_active":"2026-03-10T08:36:21.884176+0000","last_peered":"2026-03-10T08:36:21.884176+0000","last_clean":"2026-03-10T08:36:21.884176+0000","last_became_active":"2026-03-10T08:36:16.419287+0000","last_became_peered":"2026-03-10T08:36:16.419287+0000","last_unstale":"2026-03-10T08:36:21.884176+0000","last_undegraded":"2026-03-10T08:36:21.884176+0000","last_fullsized":"2026-03-10T08:36:21.884176+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:41:36.647112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886183+0000","last_change":"2026-03-10T08:36:18.453177+0000","last_active":"2026-03-10T08:36:21.886183+0000","last_peered":"2026-03-10T08:36:21.886183+0000","last_clean":"2026-03-10T08:36:21.886183+0000","last_became_active":"2026-03-10T08:36:18.453096+0000","last_became_peered":"2026-03-10T08:36:18.453096+0000","last_unstale":"2026-03-10T08:36:21.886183+0000","last_undegraded":"2026-03-10T08:36:21.886183+0000","last_fullsized":"2026-03-10T08:36:21.886183+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.3943
30+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:18:05.297561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394296+0000","last_change":"2026-03-10T08:36:14.473434+0000","last_active":"2026-03-10T08:36:21.394296+0000","last_peered":"2026-03-10T08:36:21.394296+0000","last_clean":"2026-03-10T08:36:21.394296+0000","last_became_active":"2026-03-10T08:36:14.473202+0000","last_became_peered":"2026-03-10T08:36:14.473202+0000","las
t_unstale":"2026-03-10T08:36:21.394296+0000","last_undegraded":"2026-03-10T08:36:21.394296+0000","last_fullsized":"2026-03-10T08:36:21.394296+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:21:11.362393+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3
.f","version":"47'2","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399360+0000","last_change":"2026-03-10T08:36:12.408166+0000","last_active":"2026-03-10T08:36:21.399360+0000","last_peered":"2026-03-10T08:36:21.399360+0000","last_clean":"2026-03-10T08:36:21.399360+0000","last_became_active":"2026-03-10T08:36:12.408081+0000","last_became_peered":"2026-03-10T08:36:12.408081+0000","last_unstale":"2026-03-10T08:36:21.399360+0000","last_undegraded":"2026-03-10T08:36:21.399360+0000","last_fullsized":"2026-03-10T08:36:21.399360+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:52:25.669803+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399423+0000","last_change":"2026-03-10T08:36:16.438459+0000","last_active":"2026-03-10T08:36:21.399423+0000","last_peered":"2026-03-10T08:36:21.399423+0000","last_clean":"2026-03-10T08:36:21.399423+0000","last_became_active":"2026-03-10T08:36:16.432537+0000","last_became_peered":"2026-03-10T08:36:16.432537+0000","last_unstale":"2026-03-10T08:36:21.399423+0000","last_undegraded":"2026-03-10T08:36:21.399423+0000","last_fullsized":"2026-03-10T08:36:21.399423+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.
387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:04:18.646873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394263+0000","last_change":"2026-03-10T08:36:18.440967+0000","last_active":"2026-03-10T08:36:21.394263+0000","last_peered":"2026-03-10T08:36:21.394263+0000","last_clean":"2026-03-10T08:36:21.394263+0000","last_became_active":"2026-03-10T08:36:18.438962+0000","last_became_peered":"2026-03-10T08:36:18.438962+0000","l
ast_unstale":"2026-03-10T08:36:21.394263+0000","last_undegraded":"2026-03-10T08:36:21.394263+0000","last_fullsized":"2026-03-10T08:36:21.394263+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:55:00.104333+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","
version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452555+0000","last_change":"2026-03-10T08:36:12.414967+0000","last_active":"2026-03-10T08:36:21.452555+0000","last_peered":"2026-03-10T08:36:21.452555+0000","last_clean":"2026-03-10T08:36:21.452555+0000","last_became_active":"2026-03-10T08:36:12.414119+0000","last_became_peered":"2026-03-10T08:36:12.414119+0000","last_unstale":"2026-03-10T08:36:21.452555+0000","last_undegraded":"2026-03-10T08:36:21.452555+0000","last_fullsized":"2026-03-10T08:36:21.452555+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:22:33.807844+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"54'6","reported_seq":32,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899015+0000","last_change":"2026-03-10T08:36:14.475483+0000","last_active":"2026-03-10T08:36:21.899015+0000","last_peered":"2026-03-10T08:36:21.899015+0000","last_clean":"2026-03-10T08:36:21.899015+0000","last_became_active":"2026-03-10T08:36:14.475341+0000","last_became_peered":"2026-03-10T08:36:14.475341+0000","last_unstale":"2026-03-10T08:36:21.899015+0000","last_undegraded":"2026-03-10T08:36:21.899015+0000","last_fullsized":"2026-03-10T08:36:21.899015+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:14:03.228863+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394007+0000","last_change":"2026-03-10T08:36:16.429063+0000","last_active":"2026-03-10T08:36:21.394007+0000","last_peered":"2026-03-10T08:36:21.394007+0000","last_clean":"2026-03-10T08:36:21.394007+0000","last_became_active":"2026-03-10T08:36:16.428447+0000","last_became_peered":"2026-03-10T08:36:16.428447+0000","la
st_unstale":"2026-03-10T08:36:21.394007+0000","last_undegraded":"2026-03-10T08:36:21.394007+0000","last_fullsized":"2026-03-10T08:36:21.394007+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:34:52.317817+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","v
ersion":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.403208+0000","last_change":"2026-03-10T08:36:18.439956+0000","last_active":"2026-03-10T08:36:21.403208+0000","last_peered":"2026-03-10T08:36:21.403208+0000","last_clean":"2026-03-10T08:36:21.403208+0000","last_became_active":"2026-03-10T08:36:18.439812+0000","last_became_peered":"2026-03-10T08:36:18.439812+0000","last_unstale":"2026-03-10T08:36:21.403208+0000","last_undegraded":"2026-03-10T08:36:21.403208+0000","last_fullsized":"2026-03-10T08:36:21.403208+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:48:08.845489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880722+0000","last_change":"2026-03-10T08:36:14.474278+0000","last_active":"2026-03-10T08:36:21.880722+0000","last_peered":"2026-03-10T08:36:21.880722+0000","last_clean":"2026-03-10T08:36:21.880722+0000","last_became_active":"2026-03-10T08:36:14.474130+0000","last_became_peered":"2026-03-10T08:36:14.474130+0000","last_unstale":"2026-03-10T08:36:21.880722+0000","last_undegraded":"2026-03-10T08:36:21.880722+0000","last_fullsized":"2026-03-10T08:36:21.880722+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:42:45.302267+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399302+0000","last_change":"2026-03-10T08:36:12.407818+0000","last_active":"2026-03-10T08:36:21.399302+0000","last_peered":"2026-03-10T08:36:21.399302+0000","last_clean":"2026-03-10T08:36:21.399302+0000","last_became_active":"2026-03-10T08:36:12.407646+0000","last_became_peered":"2026-03-10T08:36:12.407646+000
0","last_unstale":"2026-03-10T08:36:21.399302+0000","last_undegraded":"2026-03-10T08:36:21.399302+0000","last_fullsized":"2026-03-10T08:36:21.399302+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:40:01.985072+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.
17","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.899130+0000","last_change":"2026-03-10T08:36:16.431481+0000","last_active":"2026-03-10T08:36:21.899130+0000","last_peered":"2026-03-10T08:36:21.899130+0000","last_clean":"2026-03-10T08:36:21.899130+0000","last_became_active":"2026-03-10T08:36:16.431324+0000","last_became_peered":"2026-03-10T08:36:16.431324+0000","last_unstale":"2026-03-10T08:36:21.899130+0000","last_undegraded":"2026-03-10T08:36:21.899130+0000","last_fullsized":"2026-03-10T08:36:21.899130+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:05:07.198705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.884391+0000","last_change":"2026-03-10T08:36:18.450766+0000","last_active":"2026-03-10T08:36:21.884391+0000","last_peered":"2026-03-10T08:36:21.884391+0000","last_clean":"2026-03-10T08:36:21.884391+0000","last_became_active":"2026-03-10T08:36:18.450693+0000","last_became_peered":"2026-03-10T08:36:18.450693+0000","last_unstale":"2026-03-10T08:36:21.884391+0000","last_undegraded":"2026-03-10T08:36:21.884391+0000","last_fullsized":"2026-03-10T08:36:21.884391+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:21:17.287883+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394357+0000","last_change":"2026-03-10T08:36:14.473502+0000","last_active":"2026-03-10T08:36:21.394357+0000","last_peered":"2026-03-10T08:36:21.394357+0000","last_clean":"2026-03-10T08:36:21.394357+0000","last_became_active":"2026-03-10T08:36:14.473352+0000","last_became_peered":"2026-03-10T08:36:14.473352+0000","la
st_unstale":"2026-03-10T08:36:21.394357+0000","last_undegraded":"2026-03-10T08:36:21.394357+0000","last_fullsized":"2026-03-10T08:36:21.394357+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:09:58.715484+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.
12","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880179+0000","last_change":"2026-03-10T08:36:12.405014+0000","last_active":"2026-03-10T08:36:21.880179+0000","last_peered":"2026-03-10T08:36:21.880179+0000","last_clean":"2026-03-10T08:36:21.880179+0000","last_became_active":"2026-03-10T08:36:12.404112+0000","last_became_peered":"2026-03-10T08:36:12.404112+0000","last_unstale":"2026-03-10T08:36:21.880179+0000","last_undegraded":"2026-03-10T08:36:21.880179+0000","last_fullsized":"2026-03-10T08:36:21.880179+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:45:12.121417+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886646+0000","last_change":"2026-03-10T08:36:16.431415+0000","last_active":"2026-03-10T08:36:21.886646+0000","last_peered":"2026-03-10T08:36:21.886646+0000","last_clean":"2026-03-10T08:36:21.886646+0000","last_became_active":"2026-03-10T08:36:16.431198+0000","last_became_peered":"2026-03-10T08:36:16.431198+0000","last_unstale":"2026-03-10T08:36:21.886646+0000","last_undegraded":"2026-03-10T08:36:21.886646+0000","last_fullsized":"2026-03-10T08:36:21.886646+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.38
7844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:21:44.511489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404055+0000","last_change":"2026-03-10T08:36:18.446197+0000","last_active":"2026-03-10T08:36:21.404055+0000","last_peered":"2026-03-10T08:36:21.404055+0000","last_clean":"2026-03-10T08:36:21.404055+0000","last_became_active":"2026-03-10T08:36:18.446120+0000","last_became_peered":"2026-03-10T08:36:18.446120+0000","la
st_unstale":"2026-03-10T08:36:21.404055+0000","last_undegraded":"2026-03-10T08:36:21.404055+0000","last_fullsized":"2026-03-10T08:36:21.404055+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:46:13.704547+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","v
ersion":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.898972+0000","last_change":"2026-03-10T08:36:14.474243+0000","last_active":"2026-03-10T08:36:21.898972+0000","last_peered":"2026-03-10T08:36:21.898972+0000","last_clean":"2026-03-10T08:36:21.898972+0000","last_became_active":"2026-03-10T08:36:14.474103+0000","last_became_peered":"2026-03-10T08:36:14.474103+0000","last_unstale":"2026-03-10T08:36:21.898972+0000","last_undegraded":"2026-03-10T08:36:21.898972+0000","last_fullsized":"2026-03-10T08:36:21.898972+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:16:22.013377+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399208+0000","last_change":"2026-03-10T08:36:12.400550+0000","last_active":"2026-03-10T08:36:21.399208+0000","last_peered":"2026-03-10T08:36:21.399208+0000","last_clean":"2026-03-10T08:36:21.399208+0000","last_became_active":"2026-03-10T08:36:12.400206+0000","last_became_peered":"2026-03-10T08:36:12.400206+0000","last_unstale":"2026-03-10T08:36:21.399208+0000","last_undegraded":"2026-03-10T08:36:21.399208+0000","last_fullsized":"2026-03-10T08:36:21.399208+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11
.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:52:05.723594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"54'8","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.394508+0000","last_change":"2026-03-10T08:36:16.417048+0000","last_active":"2026-03-10T08:36:21.394508+0000","last_peered":"2026-03-10T08:36:21.394508+0000","last_clean":"2026-03-10T08:36:21.394508+0000","last_became_active":"2026-03-10T08:36:16.416864+0000","last_became_peered":"2026-03-10T08:36:16.416864+0000"
,"last_unstale":"2026-03-10T08:36:21.394508+0000","last_undegraded":"2026-03-10T08:36:21.394508+0000","last_fullsized":"2026-03-10T08:36:21.394508+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:20:41.922881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16
","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.879959+0000","last_change":"2026-03-10T08:36:18.451235+0000","last_active":"2026-03-10T08:36:21.879959+0000","last_peered":"2026-03-10T08:36:21.879959+0000","last_clean":"2026-03-10T08:36:21.879959+0000","last_became_active":"2026-03-10T08:36:18.451125+0000","last_became_peered":"2026-03-10T08:36:18.451125+0000","last_unstale":"2026-03-10T08:36:21.879959+0000","last_undegraded":"2026-03-10T08:36:21.879959+0000","last_fullsized":"2026-03-10T08:36:21.879959+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T09:15:03.190145+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404419+0000","last_change":"2026-03-10T08:36:14.413718+0000","last_active":"2026-03-10T08:36:21.404419+0000","last_peered":"2026-03-10T08:36:21.404419+0000","last_clean":"2026-03-10T08:36:21.404419+0000","last_became_active":"2026-03-10T08:36:14.412454+0000","last_became_peered":"2026-03-10T08:36:14.412454+0000","last_unstale":"2026-03-10T08:36:21.404419+0000","last_undegraded":"2026-03-10T08:36:21.404419+0000","last_fullsized":"2026-03-10T08:36:21.404419+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.3
82053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:20:31.986118+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.404396+0000","last_change":"2026-03-10T08:36:12.411048+0000","last_active":"2026-03-10T08:36:21.404396+0000","last_peered":"2026-03-10T08:36:21.404396+0000","last_clean":"2026-03-10T08:36:21.404396+0000","last_became_active":"2026-03-10T08:36:12.410963+0000","last_became_peered":"2026-03-10T08:36:12.410963+
0000","last_unstale":"2026-03-10T08:36:21.404396+0000","last_undegraded":"2026-03-10T08:36:21.404396+0000","last_fullsized":"2026-03-10T08:36:21.404396+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:02:42.488859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"5.12","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892845+0000","last_change":"2026-03-10T08:36:16.416965+0000","last_active":"2026-03-10T08:36:21.892845+0000","last_peered":"2026-03-10T08:36:21.892845+0000","last_clean":"2026-03-10T08:36:21.892845+0000","last_became_active":"2026-03-10T08:36:16.416880+0000","last_became_peered":"2026-03-10T08:36:16.416880+0000","last_unstale":"2026-03-10T08:36:21.892845+0000","last_undegraded":"2026-03-10T08:36:21.892845+0000","last_fullsized":"2026-03-10T08:36:21.892845+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:50:49.036177+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"54'1","reported_seq":16,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886016+0000","last_change":"2026-03-10T08:36:18.428754+0000","last_active":"2026-03-10T08:36:21.886016+0000","last_peered":"2026-03-10T08:36:21.886016+0000","last_clean":"2026-03-10T08:36:21.886016+0000","last_became_active":"2026-03-10T08:36:18.428686+0000","last_became_peered":"2026-03-10T08:36:18.428686+0000","last_unstale":"2026-03-10T08:36:21.886016+0000","last_undegraded":"2026-03-10T08:36:21.886016+0000","last_fullsized":"2026-03-10T08:36:21.886016+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.39
4330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:31:01.768661+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892147+0000","last_change":"2026-03-10T08:36:14.411554+0000","last_active":"2026-03-10T08:36:21.892147+0000","last_peered":"2026-03-10T08:36:21.892147+0000","last_clean":"2026-03-10T08:36:21.892147+0000","last_became_active":"2026-03-10T08:36:14.411179+0000","last_became_peered":"2026-03-10T08:36:14.411179+0000","
last_unstale":"2026-03-10T08:36:21.892147+0000","last_undegraded":"2026-03-10T08:36:21.892147+0000","last_fullsized":"2026-03-10T08:36:21.892147+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:15:58.327178+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"
3.15","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.399152+0000","last_change":"2026-03-10T08:36:12.407283+0000","last_active":"2026-03-10T08:36:21.399152+0000","last_peered":"2026-03-10T08:36:21.399152+0000","last_clean":"2026-03-10T08:36:21.399152+0000","last_became_active":"2026-03-10T08:36:12.401220+0000","last_became_peered":"2026-03-10T08:36:12.401220+0000","last_unstale":"2026-03-10T08:36:21.399152+0000","last_undegraded":"2026-03-10T08:36:21.399152+0000","last_fullsized":"2026-03-10T08:36:21.399152+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:05:13.110333+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886367+0000","last_change":"2026-03-10T08:36:16.420867+0000","last_active":"2026-03-10T08:36:21.886367+0000","last_peered":"2026-03-10T08:36:21.886367+0000","last_clean":"2026-03-10T08:36:21.886367+0000","last_became_active":"2026-03-10T08:36:16.420767+0000","last_became_peered":"2026-03-10T08:36:16.420767+0000","last_unstale":"2026-03-10T08:36:21.886367+0000","last_undegraded":"2026-03-10T08:36:21.886367+0000","last_fullsized":"2026-03-10T08:36:21.886367+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387
844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:39:56.108735+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.879888+0000","last_change":"2026-03-10T08:36:18.420018+0000","last_active":"2026-03-10T08:36:21.879888+0000","last_peered":"2026-03-10T08:36:21.879888+0000","last_clean":"2026-03-10T08:36:21.879888+0000","last_became_active":"2026-03-10T08:36:18.419939+0000","last_became_peered":"2026-03-10T08:36:18.419939+0000","las
t_unstale":"2026-03-10T08:36:21.879888+0000","last_undegraded":"2026-03-10T08:36:21.879888+0000","last_fullsized":"2026-03-10T08:36:21.879888+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:37:12.387445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","ve
rsion":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.898890+0000","last_change":"2026-03-10T08:36:14.474655+0000","last_active":"2026-03-10T08:36:21.898890+0000","last_peered":"2026-03-10T08:36:21.898890+0000","last_clean":"2026-03-10T08:36:21.898890+0000","last_became_active":"2026-03-10T08:36:14.474518+0000","last_became_peered":"2026-03-10T08:36:14.474518+0000","last_unstale":"2026-03-10T08:36:21.898890+0000","last_undegraded":"2026-03-10T08:36:21.898890+0000","last_fullsized":"2026-03-10T08:36:21.898890+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:48:07.890078+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.393856+0000","last_change":"2026-03-10T08:36:12.397950+0000","last_active":"2026-03-10T08:36:21.393856+0000","last_peered":"2026-03-10T08:36:21.393856+0000","last_clean":"2026-03-10T08:36:21.393856+0000","last_became_active":"2026-03-10T08:36:12.397528+0000","last_became_peered":"2026-03-10T08:36:12.397528+0000","last_unstale":"2026-03-10T08:36:21.393856+0000","last_undegraded":"2026-03-10T08:36:21.393856+0000","last_fullsized":"2026-03-10T08:36:21.393856+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:
11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:49:35.407002+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.402867+0000","last_change":"2026-03-10T08:36:16.432068+0000","last_active":"2026-03-10T08:36:21.402867+0000","last_peered":"2026-03-10T08:36:21.402867+0000","last_clean":"2026-03-10T08:36:21.402867+0000","last_became_active":"2026-03-10T08:36:16.431964+0000","last_became_peered":"2026-03-10T08:36:16.431964+0000
","last_unstale":"2026-03-10T08:36:21.402867+0000","last_undegraded":"2026-03-10T08:36:21.402867+0000","last_fullsized":"2026-03-10T08:36:21.402867+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:07:17.951369+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886875+0000","last_change":"2026-03-10T08:36:18.426029+0000","last_active":"2026-03-10T08:36:21.886875+0000","last_peered":"2026-03-10T08:36:21.886875+0000","last_clean":"2026-03-10T08:36:21.886875+0000","last_became_active":"2026-03-10T08:36:18.425926+0000","last_became_peered":"2026-03-10T08:36:18.425926+0000","last_unstale":"2026-03-10T08:36:21.886875+0000","last_undegraded":"2026-03-10T08:36:21.886875+0000","last_fullsized":"2026-03-10T08:36:21.886875+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:53:20.882581+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"54'4","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886280+0000","last_change":"2026-03-10T08:36:14.413500+0000","last_active":"2026-03-10T08:36:21.886280+0000","last_peered":"2026-03-10T08:36:21.886280+0000","last_clean":"2026-03-10T08:36:21.886280+0000","last_became_active":"2026-03-10T08:36:14.413351+0000","last_became_peered":"2026-03-10T08:36:14.413351+0000","last_unstale":"2026-03-10T08:36:21.886280+0000","last_undegraded":"2026-03-10T08:36:21.886280+0000","last_fullsized":"2026-03-10T08:36:21.886280+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.38
2053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T10:18:12.192762+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880155+0000","last_change":"2026-03-10T08:36:12.410694+0000","last_active":"2026-03-10T08:36:21.880155+0000","last_peered":"2026-03-10T08:36:21.880155+0000","last_clean":"2026-03-10T08:36:21.880155+0000","last_became_active":"2026-03-10T08:36:12.404724+0000","last_became_peered":"2026-03-10T08:36:12.404724+0000","la
st_unstale":"2026-03-10T08:36:21.880155+0000","last_undegraded":"2026-03-10T08:36:21.880155+0000","last_fullsized":"2026-03-10T08:36:21.880155+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:29:34.907116+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","v
ersion":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452987+0000","last_change":"2026-03-10T08:36:16.433406+0000","last_active":"2026-03-10T08:36:21.452987+0000","last_peered":"2026-03-10T08:36:21.452987+0000","last_clean":"2026-03-10T08:36:21.452987+0000","last_became_active":"2026-03-10T08:36:16.433216+0000","last_became_peered":"2026-03-10T08:36:16.433216+0000","last_unstale":"2026-03-10T08:36:21.452987+0000","last_undegraded":"2026-03-10T08:36:21.452987+0000","last_fullsized":"2026-03-10T08:36:21.452987+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:05:50.448617+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.398984+0000","last_change":"2026-03-10T08:36:18.439895+0000","last_active":"2026-03-10T08:36:21.398984+0000","last_peered":"2026-03-10T08:36:21.398984+0000","last_clean":"2026-03-10T08:36:21.398984+0000","last_became_active":"2026-03-10T08:36:18.439622+0000","last_became_peered":"2026-03-10T08:36:18.439622+0000","last_unstale":"2026-03-10T08:36:21.398984+0000","last_undegraded":"2026-03-10T08:36:21.398984+0000","last_fullsized":"2026-03-10T08:36:21.398984+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394
330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:01:40.457227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.892248+0000","last_change":"2026-03-10T08:36:18.434413+0000","last_active":"2026-03-10T08:36:21.892248+0000","last_peered":"2026-03-10T08:36:21.892248+0000","last_clean":"2026-03-10T08:36:21.892248+0000","last_became_active":"2026-03-10T08:36:18.434322+0000","last_became_peered":"2026-03-10T08:36:18.434322+0000","las
t_unstale":"2026-03-10T08:36:21.892248+0000","last_undegraded":"2026-03-10T08:36:21.892248+0000","last_fullsized":"2026-03-10T08:36:21.892248+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:17.394330+0000","last_clean_scrub_stamp":"2026-03-10T08:36:17.394330+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:31:50.758084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","ve
rsion":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.886792+0000","last_change":"2026-03-10T08:36:12.424692+0000","last_active":"2026-03-10T08:36:21.886792+0000","last_peered":"2026-03-10T08:36:21.886792+0000","last_clean":"2026-03-10T08:36:21.886792+0000","last_became_active":"2026-03-10T08:36:12.424496+0000","last_became_peered":"2026-03-10T08:36:12.424496+0000","last_unstale":"2026-03-10T08:36:21.886792+0000","last_undegraded":"2026-03-10T08:36:21.886792+0000","last_fullsized":"2026-03-10T08:36:21.886792+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:11.374234+0000","last_clean_scrub_stamp":"2026-03-10T08:36:11.374234+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:01:34.197675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.452710+0000","last_change":"2026-03-10T08:36:14.410903+0000","last_active":"2026-03-10T08:36:21.452710+0000","last_peered":"2026-03-10T08:36:21.452710+0000","last_clean":"2026-03-10T08:36:21.452710+0000","last_became_active":"2026-03-10T08:36:14.410711+0000","last_became_peered":"2026-03-10T08:36:14.410711+0000","last_unstale":"2026-03-10T08:36:21.452710+0000","last_undegraded":"2026-03-10T08:36:21.452710+0000","last_fullsized":"2026-03-10T08:36:21.452710+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:13.382053+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:13.3
82053+0000","last_clean_scrub_stamp":"2026-03-10T08:36:13.382053+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T09:29:39.210411+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-10T08:36:21.880583+0000","last_change":"2026-03-10T08:36:16.431631+0000","last_active":"2026-03-10T08:36:21.880583+0000","last_peered":"2026-03-10T08:36:21.880583+0000","last_clean":"2026-03-10T08:36:21.880583+0000","last_became_active":"2026-03-10T08:36:16.431551+0000","last_became_peered":"2026-03-10T08:36:16.431551+
0000","last_unstale":"2026-03-10T08:36:21.880583+0000","last_undegraded":"2026-03-10T08:36:21.880583+0000","last_fullsized":"2026-03-10T08:36:21.880583+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T08:36:15.387844+0000","last_clean_scrub_stamp":"2026-03-10T08:36:15.387844+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T12:42:09.312170+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_s
tats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_
snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num
_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":6,"num_read_kb":1,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size
":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2314240,"data_stored":2296400,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":43,"seq":184683593732,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27836,"kb_used_data":1000,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939588,"statfs":{"total":21470642176,"available":21442138112,"internally_reserved":0,"allocated":1024000,"data_stored":672045,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns
":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":38,"seq":163208757255,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27820,"kb_used_data":980,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939604,"statfs":{"total":21470642176,"available":21442154496,"internally_reserved":0,"allocated":1003520,"data_stored":670960,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":33,"seq":141733920777,"num_pgs":33,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27380,"kb_used_data":540,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940044,"statfs":{"total":21470642176,"available":21442605056,"internally_reserved":0,"allocated":552960,"data_stored":212488,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":4,"up_from":28,"seq":120259084299,"num_pgs":51,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27380,"kb_used_data":548,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940044,"statfs":{"total":21470642176,"available":21442605056,"internally_reserved":0,"allocated":561152,"data_stored":207128,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_pe
ers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":7,"apply_latency_ms":7,"commit_latency_ns":7000000,"apply_latency_ns":7000000},"alerts":[]},{"osd":3,"up_from":23,"seq":98784247821,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27396,"kb_used_data":560,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940028,"statfs":{"total":21470642176,"available":21442588672,"internally_reserved":0,"allocated":573440,"data_stored":207427,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":9,"apply_latency_ms":9,"commit_latency_ns":9000000,"apply_latency_ns":9000000},"alerts":[]},{"osd":2,"up_from":16,"seq":68719476752,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27364,"kb_used_data":528,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940060,"statfs":{"total":21470642176,"available":21442621440,"internally_reserved":0,"allocated":540672,"data_stored":212264,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607570,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27436,"kb_used_data":600,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939988,"statfs":{"total":21470642176
,"available":21442547712,"internally_reserved":0,"allocated":614400,"data_stored":214894,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":8,"apply_latency_ms":8,"commit_latency_ns":8000000,"apply_latency_ns":8000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738388,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27844,"kb_used_data":1008,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939580,"statfs":{"total":21470642176,"available":21442129920,"internally_reserved":0,"allocated":1032192,"data_stored":671767,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved"
:0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"inter
nally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":138,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1177,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"ava
ilable":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poo
lid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,
"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T08:36:25.743 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T08:36:25.743 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T08:36:25.743 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T08:36:25.744 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph health --format=json 2026-03-10T08:36:25.962 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:36:26.213 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:36:26.213 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T08:36:26.277 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 systemd[1]: Starting Ceph prometheus.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:36:26.320 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T08:36:26.320 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T08:36:26.320 INFO:teuthology.run_tasks:Running task workunit... 2026-03-10T08:36:26.325 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T08:36:26.325 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-10T08:36:26.326 DEBUG:teuthology.orchestra.run.vm03:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T08:36:26.347 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T08:36:26.348 INFO:teuthology.orchestra.run.vm03.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T08:36:26.348 DEBUG:teuthology.orchestra.run.vm03:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T08:36:26.409 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T08:36:26.409 DEBUG:teuthology.orchestra.run.vm03:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T08:36:26.469 INFO:tasks.workunit:timeout=1h 2026-03-10T08:36:26.469 INFO:tasks.workunit:cleanup=True 2026-03-10T08:36:26.469 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T08:36:26.524 INFO:tasks.workunit.client.0.vm03.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: from='client.14688 v1:192.168.123.103:0/3057766080' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2686303661' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 143 op/s 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:26 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 podman[79049]: 2026-03-10 08:36:26.276188542 +0000 UTC m=+0.062306856 container create a45adbe4d96be880ddb67e2d677c9d93e1dbd627b4e221f8ac28fd9f824439e2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 podman[79049]: 2026-03-10 08:36:26.306839965 +0000 UTC m=+0.092958289 container init a45adbe4d96be880ddb67e2d677c9d93e1dbd627b4e221f8ac28fd9f824439e2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 podman[79049]: 2026-03-10 08:36:26.309677186 +0000 UTC 
m=+0.095795500 container start a45adbe4d96be880ddb67e2d677c9d93e1dbd627b4e221f8ac28fd9f824439e2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 bash[79049]: a45adbe4d96be880ddb67e2d677c9d93e1dbd627b4e221f8ac28fd9f824439e2 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 podman[79049]: 2026-03-10 08:36:26.227322346 +0000 UTC m=+0.013440670 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 systemd[1]: Started Ceph prometheus.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.339Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.339Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.339Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm06 (none))" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.339Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.339Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.341Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.341Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.344Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.344Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.345Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.345Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.173µs 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.345Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.345Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.345Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=14.668µs wal_replay_duration=79.559µs wbl_replay_duration=120ns total_replay_duration=105.848µs 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.346Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.346Z caller=main.go:1153 level=info msg="TSDB started" 
2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.346Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.362Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=16.826926ms db_storage=801ns remote_storage=761ns web_handler=109ns query_engine=350ns scrape=616.794µs scrape_sd=113.522µs notify=371ns notify_sd=410ns rules=15.917344ms tracing=4.568µs 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.363Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-10T08:36:26.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:26 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:26.363Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: from='client.14688 v1:192.168.123.103:0/3057766080' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2686303661' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 143 op/s 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: from='client.14688 v1:192.168.123.103:0/3057766080' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T08:36:26.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2686303661' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T08:36:26.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 143 op/s 2026-03-10T08:36:26.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' 2026-03-10T08:36:26.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:26 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T08:36:27.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:27 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ignoring --setuser ceph since I am not root 2026-03-10T08:36:27.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:27 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ignoring --setgroup ceph since I am not root 2026-03-10T08:36:27.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:27 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:27.496+0000 7fd2875f8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T08:36:27.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:27 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:27.540+0000 7fd2875f8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T08:36:27.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:27 
vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: ignoring --setuser ceph since I am not root 2026-03-10T08:36:27.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:27 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: ignoring --setgroup ceph since I am not root 2026-03-10T08:36:27.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:27 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:27.497+0000 7fe5027c8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T08:36:27.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:27 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:27.542+0000 7fe5027c8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T08:36:28.339 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:27 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:27.983+0000 7fe5027c8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T08:36:28.342 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:27 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:27.989+0000 7fd2875f8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:28 vm03 ceph-mon[57160]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:28 vm03 ceph-mon[57160]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:28.340+0000 7fd2875f8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: from numpy import show_config as show_numpy_config 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:28.433+0000 7fd2875f8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:28.470+0000 7fd2875f8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:28.548+0000 7fd2875f8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:28 vm03 ceph-mon[50703]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T08:36:28.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:28 vm03 ceph-mon[50703]: mgrmap e17: y(active, since 2m), 
standbys: x 2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:28.340+0000 7fe5027c8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: from numpy import show_config as show_numpy_config 2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:28.447+0000 7fe5027c8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:28.487+0000 7fe5027c8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T08:36:28.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:28.564+0000 7fe5027c8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T08:36:28.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:28 vm06 ceph-mon[54477]: from='mgr.14150 v1:192.168.123.103:0/1905615783' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T08:36:28.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:28 vm06 ceph-mon[54477]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T08:36:29.383 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.102+0000 7fe5027c8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T08:36:29.383 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.220+0000 7fe5027c8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:36:29.383 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.264+0000 7fe5027c8140 -1 mgr[py] Module 
osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T08:36:29.383 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.301+0000 7fe5027c8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T08:36:29.383 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.344+0000 7fe5027c8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T08:36:29.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.167+0000 7fd2875f8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T08:36:29.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.336+0000 7fd2875f8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:36:29.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.388+0000 7fd2875f8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T08:36:29.768 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.436+0000 7fd2875f8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T08:36:29.768 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.492+0000 7fd2875f8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T08:36:29.768 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.539+0000 7fd2875f8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-10T08:36:29.768 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.766+0000 7fd2875f8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T08:36:29.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.382+0000 7fe5027c8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T08:36:29.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.558+0000 7fe5027c8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T08:36:29.839 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.612+0000 7fe5027c8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T08:36:30.129 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:29.835+0000 7fd2875f8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T08:36:30.129 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.127+0000 7fd2875f8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T08:36:30.141 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:29.848+0000 7fe5027c8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T08:36:30.434 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.140+0000 7fe5027c8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T08:36:30.434 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 
08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.179+0000 7fe5027c8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T08:36:30.434 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.222+0000 7fe5027c8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T08:36:30.434 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.304+0000 7fe5027c8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T08:36:30.434 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.344+0000 7fe5027c8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T08:36:30.714 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.433+0000 7fe5027c8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T08:36:30.714 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.558+0000 7fe5027c8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:36:30.793 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.504+0000 7fd2875f8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T08:36:30.794 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.552+0000 7fd2875f8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T08:36:30.794 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.624+0000 7fd2875f8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T08:36:30.794 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.736+0000 7fd2875f8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T08:36:30.794 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.791+0000 7fd2875f8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T08:36:31.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[50703]: Standby manager daemon x restarted 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[50703]: Standby manager daemon x started 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[50703]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[50703]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[50703]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[50703]: from='mgr.? 
v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[57160]: Standby manager daemon x restarted 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[57160]: Standby manager daemon x started 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[57160]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[57160]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[57160]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:30 vm03 ceph-mon[57160]: from='mgr.? 
v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:30.905+0000 7fd2875f8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T08:36:31.044 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:31.041+0000 7fd2875f8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.712+0000 7fe5027c8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:30.753+0000 7fe5027c8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: [10/Mar/2026:08:36:30] ENGINE Bus STARTING 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: CherryPy Checker: 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: The Application mounted at '' has an empty config. 
2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: [10/Mar/2026:08:36:30] ENGINE Serving on http://:::9283 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:36:30 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x[56265]: [10/Mar/2026:08:36:30] ENGINE Bus STARTED 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:30 vm06 ceph-mon[54477]: Standby manager daemon x restarted 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:30 vm06 ceph-mon[54477]: Standby manager daemon x started 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:30 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:30 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:30 vm06 ceph-mon[54477]: from='mgr.? v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T08:36:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:30 vm06 ceph-mon[54477]: from='mgr.? 
v1:192.168.123.106:0/184704273' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T08:36:31.342 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:31.201+0000 7fd2875f8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T08:36:31.343 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 2026-03-10T08:36:31.247+0000 7fd2875f8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T08:36:31.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:31] ENGINE Bus STARTING 2026-03-10T08:36:31.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: CherryPy Checker: 2026-03-10T08:36:31.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: The Application mounted at '' has an empty config. 
2026-03-10T08:36:31.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]:
2026-03-10T08:36:31.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:31] ENGINE Serving on http://:::9283
2026-03-10T08:36:31.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:31 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:31] ENGINE Bus STARTED
2026-03-10T08:36:32.047 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: mgrmap e18: y(active, since 2m), standbys: x
2026-03-10T08:36:32.047 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: Active manager daemon y restarted
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: Activating manager daemon y
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: osdmap e56: 8 total, 8 up, 8 in
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: mgrmap e19: y(active, starting, since 0.0149885s), standbys: x
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: Manager daemon y is now available
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:36:32.048 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:31 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: mgrmap e18: y(active, since 2m), standbys: x
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: Active manager daemon y restarted
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: Activating manager daemon y
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: osdmap e56: 8 total, 8 up, 8 in
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: mgrmap e19: y(active, starting, since 0.0149885s), standbys: x
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T08:36:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: Manager daemon y is now available
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: mgrmap e18: y(active, since 2m), standbys: x
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: Active manager daemon y restarted
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: Activating manager daemon y
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: osdmap e56: 8 total, 8 up, 8 in
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: mgrmap e19: y(active, starting, since 0.0149885s), standbys: x
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: Manager daemon y is now available
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:36:32.179 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:31 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T08:36:32.340 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: [10/Mar/2026:08:36:32] ENGINE Bus STARTING
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: mgrmap e20: y(active, since 1.03437s), standbys: x
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: [10/Mar/2026:08:36:32] ENGINE Serving on http://192.168.123.103:8765
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: [10/Mar/2026:08:36:32] ENGINE Serving on https://192.168.123.103:7150
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: [10/Mar/2026:08:36:32] ENGINE Bus STARTED
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: [10/Mar/2026:08:36:32] ENGINE Client ('192.168.123.103', 45868) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:36:33.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:33 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: [10/Mar/2026:08:36:32] ENGINE Bus STARTING
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: mgrmap e20: y(active, since 1.03437s), standbys: x
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: [10/Mar/2026:08:36:32] ENGINE Serving on http://192.168.123.103:8765
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: [10/Mar/2026:08:36:32] ENGINE Serving on https://192.168.123.103:7150
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: [10/Mar/2026:08:36:32] ENGINE Bus STARTED
2026-03-10T08:36:33.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: [10/Mar/2026:08:36:32] ENGINE Client ('192.168.123.103', 45868) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: [10/Mar/2026:08:36:32] ENGINE Bus STARTING
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: mgrmap e20: y(active, since 1.03437s), standbys: x
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: [10/Mar/2026:08:36:32] ENGINE Serving on http://192.168.123.103:8765
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: [10/Mar/2026:08:36:32] ENGINE Serving on https://192.168.123.103:7150
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: [10/Mar/2026:08:36:32] ENGINE Bus STARTED
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: [10/Mar/2026:08:36:32] ENGINE Client ('192.168.123.103', 45868) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:36:33.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:33 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:36:34.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm06:/etc/ceph/ceph.conf
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm06:/etc/ceph/ceph.client.admin.keyring
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:34 vm06 ceph-mon[54477]: mgrmap e21: y(active, since 3s), standbys: x
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm06:/etc/ceph/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm06:/etc/ceph/ceph.client.admin.keyring
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[50703]: mgrmap e21: y(active, since 3s), standbys: x
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm06:/etc/ceph/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.conf
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm06:/etc/ceph/ceph.client.admin.keyring
2026-03-10T08:36:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: Updating vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/config/ceph.client.admin.keyring
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:36:34.679 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:34 vm03 ceph-mon[57160]: mgrmap e21: y(active, since 3s), standbys: x
2026-03-10T08:36:35.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:35 vm06 ceph-mon[54477]: Deploying daemon alertmanager.a on vm03
2026-03-10T08:36:35.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:36:35 vm03 ceph-mon[50703]: Deploying daemon alertmanager.a on vm03 2026-03-10T08:36:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:35 vm03 ceph-mon[57160]: Deploying daemon alertmanager.a on vm03 2026-03-10T08:36:36.478 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:36 vm03 ceph-mon[50703]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:36.478 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:36 vm03 ceph-mon[50703]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-10T08:36:36.478 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:36 vm03 ceph-mon[57160]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:36.478 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:36 vm03 ceph-mon[57160]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-10T08:36:36.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:36 vm06 ceph-mon[54477]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:36:36.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:36 vm06 ceph-mon[54477]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-10T08:36:36.928 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:36 vm03 systemd[1]: Starting Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:36:37.428 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:37] ENGINE Bus STOPPING 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:36 vm03 podman[85415]: 2026-03-10 08:36:36.872244436 +0000 UTC m=+0.010289938 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:36 vm03 podman[85415]: 2026-03-10 08:36:36.986657273 +0000 UTC m=+0.124702765 volume create acfc2440fdd7247c24babe696f00f3cbe820183796293db1698df5ef1f8edd78 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:36 vm03 podman[85415]: 2026-03-10 08:36:36.990382726 +0000 UTC m=+0.128428228 container create 7e0204b4c6ab4516eb314e3b876d1d06c70817fcf32fde086e612633e331487a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 podman[85415]: 2026-03-10 08:36:37.022005515 +0000 UTC m=+0.160051017 container init 7e0204b4c6ab4516eb314e3b876d1d06c70817fcf32fde086e612633e331487a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 podman[85415]: 2026-03-10 08:36:37.024822448 +0000 UTC m=+0.162867950 container start 7e0204b4c6ab4516eb314e3b876d1d06c70817fcf32fde086e612633e331487a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 bash[85415]: 
7e0204b4c6ab4516eb314e3b876d1d06c70817fcf32fde086e612633e331487a 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 systemd[1]: Started Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.049Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.049Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.050Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.103 port=9094 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.056Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.081Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.081Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.083Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-10T08:36:37.429 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:37.083Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-10T08:36:37.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:37] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T08:36:37.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:37] ENGINE Bus STOPPED 2026-03-10T08:36:37.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:37] ENGINE Bus STARTING 2026-03-10T08:36:37.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:37] ENGINE Serving on http://:::9283 2026-03-10T08:36:37.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:37 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:37] ENGINE Bus STARTED 2026-03-10T08:36:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 
2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: Deploying daemon grafana.a on vm06 2026-03-10T08:36:38.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:38 vm06 ceph-mon[54477]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 15 op/s 2026-03-10T08:36:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: Deploying daemon grafana.a on vm06 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[50703]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 15 op/s 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: Deploying daemon grafana.a on vm06 2026-03-10T08:36:38.429 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:38 vm03 ceph-mon[57160]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 36 KiB/s rd, 0 B/s wr, 15 op/s 2026-03-10T08:36:39.428 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:39.057Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000417802s 2026-03-10T08:36:39.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:36:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:36:40.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:40 vm03 ceph-mon[50703]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T08:36:40.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:40 vm03 ceph-mon[57160]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T08:36:40.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:40 vm06 
ceph-mon[54477]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T08:36:42.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:36:42.866 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:42 vm06 ceph-mon[54477]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T08:36:42.866 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:42 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:42.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:42 vm03 ceph-mon[57160]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T08:36:42.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:42 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:42.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:42 vm03 ceph-mon[50703]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T08:36:42.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:42 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.412 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 systemd[1]: Starting Ceph grafana.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:36:43.665 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:43 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:36:43.665 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:43 vm06 ceph-mon[54477]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T08:36:43.665 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:43 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.665 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:43 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.665 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:43 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.665 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:43 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 podman[80566]: 2026-03-10 08:36:43.410748115 +0000 UTC m=+0.022144339 container create 5df7ddb3dabb26331b64b3d22e4d7621ea6b0f000922d8ed4a999cb8a38dcaad (image=quay.io/ceph/grafana:10.4.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a, maintainer=Grafana Labs ) 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 podman[80566]: 2026-03-10 08:36:43.448469261 +0000 UTC m=+0.059865475 container init 5df7ddb3dabb26331b64b3d22e4d7621ea6b0f000922d8ed4a999cb8a38dcaad (image=quay.io/ceph/grafana:10.4.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a, maintainer=Grafana Labs ) 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 podman[80566]: 2026-03-10 08:36:43.45252423 +0000 UTC m=+0.063920455 container start 5df7ddb3dabb26331b64b3d22e4d7621ea6b0f000922d8ed4a999cb8a38dcaad (image=quay.io/ceph/grafana:10.4.0, 
name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a, maintainer=Grafana Labs ) 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 bash[80566]: 5df7ddb3dabb26331b64b3d22e4d7621ea6b0f000922d8ed4a999cb8a38dcaad 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 podman[80566]: 2026-03-10 08:36:43.401007073 +0000 UTC m=+0.012403307 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 systemd[1]: Started Ceph grafana.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.556286761Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-10T08:36:43Z 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557154615Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557252999Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.55729591Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=settings t=2026-03-10T08:36:43.557333269Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557368686Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557404152Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557443455Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-10T08:36:43.665 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557479743Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557518767Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557553862Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T08:36:43.666 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557588326Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557635895Z level=info msg=Target target=[all] 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557683604Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557720674Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557755309Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557788631Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557833916Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=settings t=2026-03-10T08:36:43.557880372Z level=info msg="App 
mode production" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore t=2026-03-10T08:36:43.558084435Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore t=2026-03-10T08:36:43.55814575Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.558852512Z level=info msg="Starting DB migrations" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.559685522Z level=info msg="Executing migration" id="create migration_log table" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.560521527Z level=info msg="Migration successfully executed" id="create migration_log table" duration=835.885µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.561261341Z level=info msg="Executing migration" id="create user table" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.561702517Z level=info msg="Migration successfully executed" id="create user table" duration=441.465µs 2026-03-10T08:36:43.666 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.56229714Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.562769905Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=471.554µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.563423007Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.563897957Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=474.439µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.564523026Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.564959975Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=436.949µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.565530792Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-10T08:36:43.666 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.565946711Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=415.849µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.56654976Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.567590439Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.040528ms 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.568256405Z level=info msg="Executing migration" id="create user table v2" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.568690768Z level=info msg="Migration successfully executed" id="create user table v2" duration=434.393µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.569230808Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.569654051Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=423.162µs 
2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.570183331Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.570591576Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=407.013µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.571175268Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.571433632Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=258.504µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.57195082Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.572277121Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=326.361µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.572820808Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-10T08:36:43.666 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.573351201Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=530.242µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.573843672Z level=info msg="Executing migration" id="Update user table charset" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.573923842Z level=info msg="Migration successfully executed" id="Update user table charset" duration=80.611µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.574564161Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.575090968Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=526.776µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.575592866Z level=info msg="Executing migration" id="Add missing user data" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.575783293Z level=info msg="Migration successfully executed" id="Add missing user data" duration=190.336µs 2026-03-10T08:36:43.666 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.576360323Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.576922947Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=562.473µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.57743794Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.577862386Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=424.314µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.578380455Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.578940573Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=560.079µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.579440889Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 
2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.582486029Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=3.044679ms 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.583088396Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.583678662Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=591.197µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.584208042Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.584387138Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=179.516µs 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.584952656Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-10T08:36:43.666 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.585375577Z level=info msg="Migration successfully executed" id="Add unique index user_uid" 
duration=422.53µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.587306251Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.587804835Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=498.825µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.589006504Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.589729578Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=724.657µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.607240394Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.608254272Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.019709ms 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.628241712Z level=info msg="Executing migration" 
id="create index IDX_temp_user_code - v1-7" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.628947364Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=706.042µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.637250886Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.637771722Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=522.728µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.638291053Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.638303877Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=13.155µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.638868925Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.639248245Z level=info 
msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=379.47µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.639743132Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.640700974Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=958.004µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.641358043Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.641908504Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=550.731µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.642490855Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.642983486Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=492.521µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:43.643468374Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.64516555Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.696575ms 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.645758801Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.646200598Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=441.627µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.646718868Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.647166065Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=447.216µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.647719601Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.64807186Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=352.118µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.648564913Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.648947208Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=382.105µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.649377044Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.649736907Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=359.613µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.650227586Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.650562472Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=334.676µs 2026-03-10T08:36:43.667 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.651123934Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.651390993Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=267.251µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.651867755Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.652070686Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=202.68µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.652595317Z level=info msg="Executing migration" id="create star table" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.65292761Z level=info msg="Migration successfully executed" id="create star table" duration=332.133µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.653372622Z level=info msg="Executing migration" id="add unique index 
star.user_id_dashboard_id" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.653736183Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=363.4µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.654298596Z level=info msg="Executing migration" id="create org table v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.654703413Z level=info msg="Migration successfully executed" id="create org table v1" duration=405.058µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.655226302Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.65562626Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=399.948µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.656171241Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.656510776Z level=info msg="Migration successfully executed" id="create org_user table v1" 
duration=339.415µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.657005262Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.657423414Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=418.313µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.657897592Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.658347995Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=450.593µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.658909015Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.659265172Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=356.397µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.6597501Z level=info 
msg="Executing migration" id="Update org table charset" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.659762693Z level=info msg="Migration successfully executed" id="Update org table charset" duration=12.834µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.66032187Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.660334362Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=13.135µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.660713183Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.660801557Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=88.455µs 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.661316211Z level=info msg="Executing migration" id="create dashboard table" 2026-03-10T08:36:43.667 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.661686534Z level=info msg="Migration 
successfully executed" id="create dashboard table" duration=370.283µs 2026-03-10T08:36:43.668 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.662172403Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-10T08:36:43.913 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.665593357Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=3.420372ms 2026-03-10T08:36:43.913 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.666200955Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-10T08:36:43.913 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.666593669Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=392.544µs 2026-03-10T08:36:43.913 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.667135213Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-10T08:36:43.913 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.667442658Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=307.366µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.66799454Z 
level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.668349155Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=354.544µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.668836246Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.669186111Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=349.976µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.669720932Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.671756522Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=2.035639ms 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.672432047Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.672858666Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=427.06µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.673394739Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.673815788Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=419.346µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.674402525Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.674873657Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=471.323µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.675454243Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.67577813Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=323.767µs 2026-03-10T08:36:43.914 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.676316778Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.676887527Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=571.15µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.677448095Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.677555026Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=107.301µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.678182701Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.678990382Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=808.082µs 2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.67953931Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.680261221Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=722.102µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.680940824Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.681845987Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=906.617µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.682447073Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.6829689Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=521.636µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.68357259Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.684480238Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=907.648µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.685124694Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.685607278Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=482.524µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.686222019Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.687007548Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=784.988µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.687745661Z level=info msg="Executing migration" id="Update dashboard table charset"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.687845467Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=98.765µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.688441092Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.688517114Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=76.423µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.689094105Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.689918507Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=823.981µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.69360624Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.69435938Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=753.08µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.694960996Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.69567872Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=717.443µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.696200165Z level=info msg="Executing migration" id="Add column uid in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.696903302Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=703.167µs
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.697422814Z level=info msg="Executing migration" id="Update uid column values in dashboard"
2026-03-10T08:36:43.914 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.697553549Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=131.146µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.698142461Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.698549163Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=406.261µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.699108629Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.699474475Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=365.886µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.700051755Z level=info msg="Executing migration" id="Update dashboard title length"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.700085879Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=35.917µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.700668469Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.7010374Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=368.81µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.701533278Z level=info msg="Executing migration" id="create dashboard_provisioning"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.701894774Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=361.417µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.70241054Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.704153192Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.742522ms
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.704672954Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.705000097Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=326.891µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.705485235Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.705863542Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=378.188µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.706312583Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.706968572Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=655.608µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.707649566Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.707840623Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=191.289µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.70839395Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.70877419Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=381.705µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.709325954Z level=info msg="Executing migration" id="Add check_sum column"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.710268388Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=942.805µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.710914878Z level=info msg="Executing migration" id="Add index for dashboard_title"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.711356224Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=441.165µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.711945297Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.712066513Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=121.307µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.712667719Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.712779949Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=112.861µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.713399579Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.71383297Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=435.485µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.714354836Z level=info msg="Executing migration" id="Add isPublic for dashboard"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.71515815Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=803.494µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.715638219Z level=info msg="Executing migration" id="create data_source table"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.716059618Z level=info msg="Migration successfully executed" id="create data_source table" duration=437.097µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.716566136Z level=info msg="Executing migration" id="add index data_source.account_id"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.716969751Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=387.234µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.717427338Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.717798793Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=371.324µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.718238525Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.718602187Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=363.591µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.719062418Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.719408687Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=346.238µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.719862065Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.721743557Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=1.881572ms
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.722277637Z level=info msg="Executing migration" id="create data_source table v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.722699816Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=422.19µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.723205062Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.723589842Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=384.739µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.724089718Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.724471343Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=381.443µs
2026-03-10T08:36:43.915 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.724990685Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.725272863Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=282.218µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.725726652Z level=info msg="Executing migration" id="Add column with_credentials"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.726542549Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=815.737µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.727000747Z level=info msg="Executing migration" id="Add secure json data column"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.72779856Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=797.863µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.7282581Z level=info msg="Executing migration" id="Update data_source table charset"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.728290812Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=33.322µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.728816324Z level=info msg="Executing migration" id="Update initial version to 1"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.728932403Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=116.468µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.729421408Z level=info msg="Executing migration" id="Add read_only data column"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.730227066Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=805.548µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.73075797Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.730874248Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=116.578µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.731343666Z level=info msg="Executing migration" id="Update json_data with nulls"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.731449424Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=106.059µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.731990176Z level=info msg="Executing migration" id="Add uid column"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.732812024Z level=info msg="Migration successfully executed" id="Add uid column" duration=822.059µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.733263159Z level=info msg="Executing migration" id="Update uid value"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.733378615Z level=info msg="Migration successfully executed" id="Update uid value" duration=115.476µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.733906092Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.734288507Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=382.206µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.734755843Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.735121487Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=365.915µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.735643123Z level=info msg="Executing migration" id="create api_key table"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.73602075Z level=info msg="Migration successfully executed" id="create api_key table" duration=377.657µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.736520636Z level=info msg="Executing migration" id="add index api_key.account_id"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.736903172Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=382.526µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.737384383Z level=info msg="Executing migration" id="add index api_key.key"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.737771798Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=385.942µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.7382964Z level=info msg="Executing migration" id="add index api_key.account_id_name"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.739198087Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=900.654µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.739910811Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.740399056Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=487.994µs
2026-03-10T08:36:43.916 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.740954565Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.741385703Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=431.428µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.741855702Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.742214634Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=358.802µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.742708548Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.74476648Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.057712ms
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.745290902Z level=info msg="Executing migration" id="create api_key table v2"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.74570695Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=416.119µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.746233837Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.746752196Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=518.329µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.747327233Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.747728775Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=401.502µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.748206419Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.748581881Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=375.413µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.749073773Z level=info msg="Executing migration" id="copy api_key v1 to v2"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.74929181Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=218.108µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.749745851Z level=info msg="Executing migration" id="Drop old table api_key_v1"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.750042005Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=296.214µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.75053127Z level=info msg="Executing migration" id="Update api_key table charset"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.750565264Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=34.284µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.751061423Z level=info msg="Executing migration" id="Add expires to api_key table"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.751957039Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=894.615µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.752413293Z level=info msg="Executing migration" id="Add service account foreign key"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.753294362Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=881.1µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.753837489Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.75394461Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=107.361µs
2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.754487506Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.755393592Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=906.036µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.755844165Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.756715165Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=870.98µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.757198119Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.757567581Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=369.452µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.758048691Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 
10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.758339947Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=291.286µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.758804796Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.759218411Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=413.544µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.759703639Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.760091164Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=387.575µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.760568498Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.760983585Z level=info msg="Migration successfully executed" id="create index 
UQE_dashboard_snapshot_delete_key - v5" duration=415.077µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.761489822Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.761874842Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=385.291µs 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.762360913Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-10T08:36:43.917 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.762410636Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=50.154µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.762927352Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.762960606Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=33.734µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:43.763462625Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.764418314Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=956.701µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.764874557Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.765828442Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=953.706µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.766297371Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.766390004Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=92.924µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.766920608Z level=info msg="Executing migration" id="create quota table v1" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 
10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.767255434Z level=info msg="Migration successfully executed" id="create quota table v1" duration=334.616µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.767700206Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.768066443Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=365.885µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.768571657Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.768604559Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=33.413µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.76910201Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.769469288Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=367.207µs 2026-03-10T08:36:43.918 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.769938366Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.770315752Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=377.466µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.770888645Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.771853371Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=964.706µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.772336034Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.772368825Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=33.322µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.772924064Z level=info msg="Executing 
migration" id="create session table" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.773335995Z level=info msg="Migration successfully executed" id="create session table" duration=410.638µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.773882057Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.773951748Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=70.142µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.774467905Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.774528307Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=60.884µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.775040666Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.775383588Z level=info msg="Migration successfully executed" id="create 
playlist table v2" duration=343.985µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.775875279Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.776213351Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=338.093µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.776708428Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.776741921Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=33.772µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.77731863Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.777351873Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=33.833µs 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.777744518Z level=info msg="Executing migration" id="Add 
playlist column created_at" 2026-03-10T08:36:43.918 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.778877447Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.132299ms 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.779382674Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.780464699Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.080943ms 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.78125061Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.781328095Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=73.036µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.781925654Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.782686968Z level=info msg="Migration successfully executed" id="drop preferences table 
v3" duration=757.838µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.784186926Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.784766931Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=578.262µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.785525431Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.785605551Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=82.044µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.78814273Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.789434648Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.291026ms 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.7899536Z level=info msg="Executing migration" id="Update team_id column 
values in preferences" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.790049239Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=96.05µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.790611622Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.791744402Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.13265ms 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.792276157Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.793311003Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.034847ms 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.793831028Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.793911518Z level=info 
msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=80.731µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.794460206Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.794893466Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=433.041µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.795462201Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.795871557Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=409.606µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.796408001Z level=info msg="Executing migration" id="create alert table v1" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.796908177Z level=info msg="Migration successfully executed" id="create alert table v1" duration=499.915µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.797425155Z 
level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.797871961Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=446.406µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.798434394Z level=info msg="Executing migration" id="add index alert state" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.798830425Z level=info msg="Migration successfully executed" id="add index alert state" duration=395.8µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.799299003Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.799696977Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=397.954µs 2026-03-10T08:36:43.919 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.800190802Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.800518164Z level=info msg="Migration successfully 
executed" id="Create alert_rule_tag table v1" duration=327.382µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.801043508Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.801458545Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=415.127µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.80193674Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.802322221Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=385.431µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.802830172Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.8058583Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=3.027799ms 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.806383853Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.806736664Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=352.841µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.807224907Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.807607042Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=382.435µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.808183341Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.808359771Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=176.29µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.808815325Z level=info msg="Executing migration" 
id="drop table alert_rule_tag_v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.809088275Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=272.961µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.809597449Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.809954957Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=357.028µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.810457078Z level=info msg="Executing migration" id="Add column is_default" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.811587914Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.130826ms 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.812125551Z level=info msg="Executing migration" id="Add column frequency" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.813337079Z level=info msg="Migration successfully executed" id="Add column frequency" 
duration=1.210295ms 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.813836483Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.815220725Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.383129ms 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.815815978Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.816988883Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.173085ms 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.817482708Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.81786846Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=385.712µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.818524228Z level=info msg="Executing migration" id="Update 
alert table charset" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.818609146Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=87.864µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.819361765Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.819400016Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=38.933µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.819795276Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.820230882Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=436.868µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.820754141Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.821131728Z level=info 
msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=379.35µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.821617006Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.822015051Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=398.045µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.822770646Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.823234304Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=462.476µs 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.823921731Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-10T08:36:43.920 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.824423118Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=501.297µs 2026-03-10T08:36:43.921 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.824955135Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.82616535Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.208792ms 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.826793226Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.828033256Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.239901ms 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.828493538Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.828601119Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=106.529µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.829094622Z level=info msg="Executing migration" id="Add unique index 
alert_notification_org_id_uid" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.829465597Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=370.755µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.829963609Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.830349151Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=385.542µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.830827125Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.832000311Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.173075ms 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.832494947Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.832545092Z 
level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=50.755µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.833101181Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.833487796Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=387.435µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.833978454Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.834420932Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=442.479µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.834967294Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.835036965Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=69.571µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.835590752Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.836003554Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=412.632µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.836502379Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.836898019Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=395.449µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.837396773Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.837790229Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=392.694µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.838341731Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.838738533Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=396.09µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.839248499Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.839686719Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=437.969µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.840228373Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.840705345Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=477.794µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.841303144Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.84133852Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=34.945µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.841860798Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.843260528Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.399679ms 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.843831306Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.84422362Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=392.044µs 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.844740759Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-10T08:36:43.921 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.846020775Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.279554ms 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.846523817Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.846878059Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=353.392µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.84737596Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.847787151Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=411.16µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.848317032Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.848726539Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=408.706µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.849240421Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.852621199Z level=info msg="Migration successfully executed" id="Rename 
table annotation_tag to annotation_tag_v2 - v2" duration=3.380537ms 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.853152444Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.85350331Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=350.806µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.854019566Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.854417301Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=397.565µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.854953504Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.85512682Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=173.305µs 2026-03-10T08:36:43.922 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.855664655Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.855953808Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=289.052µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.856467408Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.856567986Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=100.688µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.857103028Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.85839198Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.288591ms 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.858906053Z level=info msg="Executing migration" id="Add updated time to annotation 
table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.860162554Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.25605ms 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.860655888Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.861039446Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=383.328µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.861485902Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.861878076Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=391.914µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.862341132Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.86246847Z 
level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=127.698µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.86299155Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.864262078Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.270067ms 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.864731166Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.865117098Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=385.001µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.865546272Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.865661267Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=114.845µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:43.866147447Z level=info msg="Executing migration" id="Move region to single row" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.866320972Z level=info msg="Migration successfully executed" id="Move region to single row" duration=173.534µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.866701434Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.867089239Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=387.575µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.867537839Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.867958366Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=420.426µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.868469964Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-10T08:36:43.922 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.868873599Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=403.464µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.869307422Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.869723711Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=415.587µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.870158935Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.870557952Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=398.597µs 2026-03-10T08:36:43.922 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.871031097Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:43.871407182Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=376.045µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.871859849Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.871909141Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=49.803µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.872393398Z level=info msg="Executing migration" id="create test_data table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.872769872Z level=info msg="Migration successfully executed" id="create test_data table" duration=376.374µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.873250182Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.873606178Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=356.086µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.87417352Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.874554663Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=381.104µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.875020316Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.875420594Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=400.139µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.875923676Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.876031357Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=107.651µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.876534859Z level=info msg="Executing migration" id="save existing 
dashboard data in dashboard_version table v1" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.876742739Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=208.01µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.877169378Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.877218139Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=49.283µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.877606756Z level=info msg="Executing migration" id="create team table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.877957132Z level=info msg="Migration successfully executed" id="create team table" duration=350.296µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.878398097Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.878846758Z level=info 
msg="Migration successfully executed" id="add index team.org_id" duration=448.551µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.879344429Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.879756741Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=412.152µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.880202415Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.881552643Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.350108ms 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.882233055Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.882344914Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=112.049µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.882868976Z level=info 
msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.883269886Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=400.58µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.883712555Z level=info msg="Executing migration" id="create team member table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.884058432Z level=info msg="Migration successfully executed" id="create team member table" duration=345.837µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.884540515Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.884932478Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=391.873µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.88542482Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.885818827Z level=info 
msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=394.689µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.886253711Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.886703353Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=449.572µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.887195564Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.88871604Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.520546ms 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.889212399Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.890755788Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.543379ms 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:43.891239474Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.892664861Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.425607ms 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.893113832Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.893537765Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=423.973µs 2026-03-10T08:36:43.923 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.894059863Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.894475751Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=415.987µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.89494528Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.895397806Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=451.866µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.895937528Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.896340461Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=401.712µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.896792267Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.897170756Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=377.707µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.897613473Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.898020647Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=407.063µs 
2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.898499342Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.898910682Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=411.07µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.899356616Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.899784989Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=428.483µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.90021362Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.900473867Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=260.236µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.901171694Z level=info msg="Executing migration" id="delete acl 
rules for deleted dashboards and folders" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.90129787Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=126.327µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.901777639Z level=info msg="Executing migration" id="create tag table" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.90212036Z level=info msg="Migration successfully executed" id="create tag table" duration=342.67µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.902635795Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.903026325Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=390.28µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.903486057Z level=info msg="Executing migration" id="create login attempt table" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.90382466Z level=info msg="Migration successfully executed" id="create login attempt 
table" duration=338.483µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.904278288Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.904675252Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=397.044µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.905107311Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.905502881Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=395.471µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.905966629Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.910005108Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.038239ms 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:43.910662629Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.910998337Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=335.789µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.912643517Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.913267895Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=645.067µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.913822693Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.913992422Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=169.899µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.914597904Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:43.91506548Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=467.756µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.915588759Z level=info msg="Executing migration" id="create user auth table" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.915956348Z level=info msg="Migration successfully executed" id="create user auth table" duration=367.589µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.916465961Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.916898261Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=430.807µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.917397215Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.917443931Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=46.928µs 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.91804157Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-10T08:36:43.924 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.91978842Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.74668ms 2026-03-10T08:36:43.926 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:36:43.926 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[57160]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T08:36:43.926 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[50703]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T08:36:43.927 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:43.927 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:43 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.9203064Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.921853946Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.548288ms 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.922345997Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.923842568Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.496421ms 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.924365868Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-10T08:36:44.171 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.925886454Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.519383ms 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.926404194Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.926798271Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=394.277µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.927311963Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.928834983Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.52293ms 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.929352272Z level=info msg="Executing migration" id="create server_lock table" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.929724217Z level=info msg="Migration successfully executed" id="create server_lock table" duration=372.026µs 
2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.930239813Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.930620856Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=380.922µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.931158072Z level=info msg="Executing migration" id="create user auth token table" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.931569903Z level=info msg="Migration successfully executed" id="create user auth token table" duration=411.81µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.932114382Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.932507427Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=394.287µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.932995491Z level=info msg="Executing migration" id="add unique index 
user_auth_token.prev_auth_token" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.933386222Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=390.731µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.933865309Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.934286597Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=421.148µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.934826488Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.936408449Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.582121ms 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.936965843Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.93738136Z level=info 
msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=415.477µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.937875445Z level=info msg="Executing migration" id="create cache_data table" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.938245337Z level=info msg="Migration successfully executed" id="create cache_data table" duration=369.943µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.938731388Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.939116918Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=386.583µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.939576178Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.939992297Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=416.539µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.940525105Z 
level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.940949379Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=424.243µs 2026-03-10T08:36:44.171 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.941442712Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.941490201Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=47.939µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.942039319Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.942100763Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=61.825µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.942609235Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:43.942999696Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=389.168µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.943488922Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.943927874Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=439.111µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.94442757Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.94487667Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=449µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.945402644Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.945447539Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to 
mediumtext in mysql" duration=45.265µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.945973252Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.946368312Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=395.05µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.946893274Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.947287302Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=394.019µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.947789431Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.948192426Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=402.986µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 
10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.948696449Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.949130782Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=434.103µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.949612243Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.951385112Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.772798ms 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.951901098Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.952331593Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=430.676µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.952849253Z level=info msg="Executing migration" id="delete alert_definition_version table" 
2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.952910157Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=61.135µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.953393461Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.953787109Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=393.527µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.954275351Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.954701029Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=424.595µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.955176789Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.955606724Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=429.825µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.956124412Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.95617071Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=46.757µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.956715069Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.957124555Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=409.466µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.957658976Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.958044717Z level=info msg="Migration 
successfully executed" id="create alert_instance table" duration=384.339µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.95853793Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.958974247Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=436.076µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.959469414Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.959873951Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=404.688µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.960351094Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.962131627Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" 
duration=1.780422ms 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.962653634Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.963029397Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=374.821µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.963485912Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.96390188Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=414.887µs 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.964413498Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.972924899Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=8.518905ms 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.973551883Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.981167006Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.612759ms 2026-03-10T08:36:44.172 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.981868599Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.982322861Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=454.711µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.982820521Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.983213197Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=394.278µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.983693165Z level=info msg="Executing 
migration" id="add current_reason column related to current_state" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.985386836Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.69339ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.98585474Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.987500972Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.646142ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.988021016Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.988471599Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=450.633µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.988947209Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:43.989392773Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=444.542µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.989847214Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.99025088Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=403.375µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.99070022Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.991146897Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=446.467µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.991692939Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.991741811Z level=info msg="Migration successfully executed" id="alter alert_rule table 
data column to mediumtext in mysql" duration=48.901µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.992211359Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.994052245Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.840906ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.994586124Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.99630411Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.717844ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.996866423Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.998560233Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.6934ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.999115882Z level=info msg="Executing 
migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:43 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:43.999537812Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=421.99µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.000065851Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.000498581Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=433.702µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.001002764Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.002844472Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.841857ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.003372601Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.005102218Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.729368ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.005575052Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.006024554Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=447.929µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.006517166Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.008274235Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.756789ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.008738063Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.010445609Z level=info msg="Migration successfully executed" id="add is_paused 
column to alert_rule table" duration=1.708628ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.010998264Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.011051022Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=51.817µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.011582498Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.012111708Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=529.17µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.012666757Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.013139672Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=471.021µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.01360322Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.014062679Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=459.51µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.014542268Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.014589657Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=47.739µs 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.015077469Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.016989668Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.911638ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.017488923Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.019399059Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.908141ms 2026-03-10T08:36:44.173 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.01984373Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.021586602Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.743043ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.022057234Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.023832817Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.775422ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.0242828Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 
08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.026038045Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.754965ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.026503966Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.026551866Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=48.351µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.027097918Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.027460797Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=362.919µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.027976694Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.02980714Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" 
duration=1.831979ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.0303047Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.030359283Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=54.932µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.030808534Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.03261167Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.803005ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.033144827Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.033558322Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=413.424µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 
08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.034048027Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.035947983Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.898343ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.036401373Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.036786473Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=384.9µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.037251523Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.037670097Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=418.454µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.038172727Z level=info msg="Executing migration" id="add column send_alerts_to in 
ngalert_configuration" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.040063136Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.890589ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.040564255Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.040905122Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=340.878µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.041456614Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.041902849Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=446.144µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.042345449Z level=info msg="Executing migration" id="create alert_image table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.042702677Z level=info msg="Migration successfully executed" id="create alert_image table" duration=355.966µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.043183037Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.04356977Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=386.733µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.044020043Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.044067172Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.359µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.044528935Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.044938933Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=409.767µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.045428199Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.045871539Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=443.34µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.046320108Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.04648725Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.046959245Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.04718696Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=227.335µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.047654275Z 
level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.048045868Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=391.924µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.048473628Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.050463894Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=1.988962ms 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.050960163Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.051412229Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=452.055µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.051880586Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.052300311Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=419.776µs 2026-03-10T08:36:44.174 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.052752228Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.053135565Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=384.088µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.053594024Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.054021724Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=427.579µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.054487215Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.054887424Z level=info msg="Migration successfully 
executed" id="add unique index library_element org_id_uid" duration=400.038µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.055322479Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.0553553Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=33.242µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.055813457Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.055860105Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=46.988µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.056303375Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.056450851Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=147.476µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.056930168Z level=info msg="Executing migration" id="create data_keys table" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.057324246Z level=info msg="Migration successfully executed" id="create data_keys table" duration=393.777µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.05777023Z level=info msg="Executing migration" id="create secrets table" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.058109604Z level=info msg="Migration successfully executed" id="create secrets table" duration=339.405µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.058566711Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.068222394Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=9.652827ms 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.068863474Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.070995584Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.133453ms 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.071504909Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.071599495Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=95.609µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.072100192Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.081857976Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=9.754037ms 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.082557937Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.09268947Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=10.124481ms 
2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.093441429Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.093921727Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=480.048µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.094504107Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.095012879Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=508.832µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.095540257Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.095697511Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=157.604µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.096270734Z level=info msg="Executing 
migration" id="create permission table" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.096732037Z level=info msg="Migration successfully executed" id="create permission table" duration=461.373µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.097256049Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.09773279Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=474.087µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.098292278Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.098808584Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=515.885µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.099360257Z level=info msg="Executing migration" id="create role table" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.099929533Z level=info msg="Migration successfully 
executed" id="create role table" duration=569.004µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.100458803Z level=info msg="Executing migration" id="add column display_name" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.102877912Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.418939ms 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.103396892Z level=info msg="Executing migration" id="add column group_name" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.105521449Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.124427ms 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.106052072Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.106503958Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=451.925µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.107093642Z level=info msg="Executing migration" id="add unique index role_org_id_name" 
2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.10755217Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=458.538µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.108060161Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.108500686Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=440.455µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.109018445Z level=info msg="Executing migration" id="create team role table" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.109390882Z level=info msg="Migration successfully executed" id="create team role table" duration=372.478µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.109908661Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.110390533Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=483.475µs 
2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.11093952Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.111415372Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=475.882µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.111988905Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.112387741Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=398.386µs 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.112885272Z level=info msg="Executing migration" id="create user role table" 2026-03-10T08:36:44.175 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.113255797Z level=info msg="Migration successfully executed" id="create user role table" duration=370.363µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.113777873Z level=info msg="Executing migration" id="add index user_role.org_id" 
2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.1141871Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=410.217µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.114704758Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.115109877Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=404.858µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.115647493Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.116048283Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=400.66µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.116588704Z level=info msg="Executing migration" id="create builtin role table" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.116961793Z level=info msg="Migration successfully executed" id="create builtin 
role table" duration=372.908µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.11745719Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.11788509Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=427.64µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.118359699Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.11879853Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=439.442µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.119324244Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.121828162Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.503656ms 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.122359886Z level=info msg="Executing migration" id="add 
index builtin_role.org_id" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.122814317Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=455.413µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.12329104Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.123740622Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=448.02µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.124189651Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.124608576Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=419.125µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.125071713Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.125478744Z level=info 
msg="Migration successfully executed" id="add unique index role.uid" duration=406.931µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.125972118Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.126309719Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=338.344µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.126806189Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.12722356Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=417.221µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.127681718Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.130062704Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.380826ms 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.130561418Z level=info msg="Executing migration" id="permission kind migration" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.132917618Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.35621ms 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.133428084Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.135737987Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.309693ms 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.136201094Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.138433192Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.231997ms 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.138898033Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.139333908Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=433.811µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.139768683Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.14108732Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.318407ms 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.141593908Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.14202239Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=427.661µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.142468865Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.142874075Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=405.219µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.143334205Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.143770462Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=435.997µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.144216888Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.144264006Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=47.378µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.144721172Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.144758823Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=38.082µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.14522272Z level=info msg="Executing migration" id="teams permissions migration" 
2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.145417424Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=195.576µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.145914074Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.146195962Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=282.379µs 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.147384487Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-10T08:36:44.176 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.147686312Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=302.036µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.148189052Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.148301664Z level=info msg="Migration successfully executed" id="drop managed folder create actions" 
duration=112.701µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.148805537Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.149045275Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=239.82µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.149532347Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.14992415Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=391.833µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.150389943Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.150840105Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=450.052µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.15129699Z level=info msg="Executing migration" 
id="add column org_id in query_history_star" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.153739281Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.441719ms 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.154249136Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.154299811Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=49.773µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.15482327Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.15529296Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=469.499µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.155803094Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.15624962Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=446.587µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.156721594Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.157135498Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=413.885µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.157561716Z level=info msg="Executing migration" id="add correlation config column" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.16003712Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.476776ms 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.16055055Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.160997026Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=446.585µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.161456276Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.161883746Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=427.33µs 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.162367181Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.168688683Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.319019ms 2026-03-10T08:36:44.177 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.169402289Z level=info msg="Executing migration" id="create correlation v2" 2026-03-10T08:36:44.178 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:44 vm03 systemd[1]: Starting Ceph node-exporter.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.170019905Z level=info msg="Migration successfully executed" id="create correlation v2" duration=617.475µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.170584812Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.171092253Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=507.751µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.171600524Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.172080973Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=480.72µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.172666138Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.173145846Z level=info msg="Migration successfully 
executed" id="create index IDX_correlation_org_id - v2" duration=481.252µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.173749055Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.173917882Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=169.188µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.177175679Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.177614059Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=438.16µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.178154071Z level=info msg="Executing migration" id="add provisioning column" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.18077154Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.617419ms 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.181346145Z level=info msg="Executing migration" 
id="create entity_events table" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.181755602Z level=info msg="Migration successfully executed" id="create entity_events table" duration=409.427µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.182292156Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.182907097Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=614.851µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.18346955Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.183696345Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.184284013Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.184491132Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.185107896Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.185543111Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=435.225µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.186095455Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.186603205Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=507.691µs 2026-03-10T08:36:44.435 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.187148695Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.187641849Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=492.893µs 2026-03-10T08:36:44.436 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.188148888Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.18864159Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=492.702µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.189139092Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.189600926Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=462.796µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.190139224Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.190609433Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=470.21µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.191176115Z level=info msg="Executing migration" id="Drop public config table" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.191557478Z level=info msg="Migration successfully executed" id="Drop public config table" duration=382.505µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.192117908Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.192614036Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=496.289µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.193121476Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.193583801Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=462.435µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.194082445Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T08:36:44.436 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.194609331Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=551.782µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.195133532Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.195649297Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=515.665µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.196149394Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.204187909Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.036412ms 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.204836353Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:44.207718458Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.8795ms 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.208343388Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.211087895Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.743867ms 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.21161979Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.211782775Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=163.616µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.212331813Z level=info msg="Executing migration" id="add share column" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.214705045Z level=info msg="Migration successfully executed" id="add share column" duration=2.373193ms 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: 
logger=migrator t=2026-03-10T08:36:44.215171218Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.215296662Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=125.676µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.21579162Z level=info msg="Executing migration" id="create file table" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.216201967Z level=info msg="Migration successfully executed" id="create file table" duration=410.778µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.216725978Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.217196749Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=470.58µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.217726341Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.218225956Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=499.804µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.218692359Z level=info msg="Executing migration" id="create file_meta table" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.219082399Z level=info msg="Migration successfully executed" id="create file_meta table" duration=389.879µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.21952601Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.219985469Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=459.3µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.220440561Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.220509911Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=69.771µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 
10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.221083164Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.221140881Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=58.199µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.221619969Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.221890705Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=270.766µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.222410949Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.222537696Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=125.706µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.223055054Z level=info msg="Executing migration" id="RBAC action name migrator" 
2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.223717023Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=662.199µs 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.224204756Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.226812818Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.607521ms 2026-03-10T08:36:44.436 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.227296283Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.227403133Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=107.491µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.227968531Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.228495848Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" 
duration=528.509µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.229031912Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.229222819Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=189.885µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.229721452Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.229853259Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=131.997µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.230363644Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.23059688Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=233.147µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.231128306Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.233661928Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.533191ms 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.234136356Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.23666542Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.528863ms 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.237172989Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.237662626Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=489.486µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.23816123Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-10T08:36:44.437 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.26403591Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=25.871234ms 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.264882946Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.265556457Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=674.152µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.266155529Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.266736436Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=580.787µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.26729967Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.275552267Z level=info msg="Migration successfully 
executed" id="add primary key to seed_assigment" duration=8.250934ms 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.276197855Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.278764579Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.566355ms 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.279276929Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.279431478Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=153.327µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.280009309Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.280128833Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=118.552µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.280650079Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.280776556Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=126.526µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.281279427Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.281399843Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=120.335µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.281805221Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.281930215Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=125.024µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.282489852Z level=info msg="Executing migration" id="create folder table" 2026-03-10T08:36:44.437 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.282911191Z level=info msg="Migration successfully executed" id="create folder table" duration=421.308µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.283361514Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.283976796Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=615.182µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.28446523Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.284966749Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=501.199µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.285438792Z level=info msg="Executing migration" id="Update folder title length" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.285470211Z level=info msg="Migration successfully executed" id="Update folder title length" duration=31.89µs 
2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.285990075Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.286496351Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=506.027µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.287002169Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.287476236Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=472.954µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.288033749Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.288544144Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=510.455µs 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.289069147Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-10T08:36:44.437 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.289314667Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=245.43µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.289816998Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.289971136Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=154.138µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.290499895Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.290994041Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=494.106µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.291472596Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-10T08:36:44.438 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.292009411Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=536.605µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.292467087Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.292958347Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=490.83µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.293415161Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.293904698Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=489.848µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.294388303Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.294870506Z 
level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=482.103µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.295341247Z level=info msg="Executing migration" id="create anon_device table" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.295737419Z level=info msg="Migration successfully executed" id="create anon_device table" duration=396.132µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.296207368Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.296752368Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=545.13µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.297274526Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.297783088Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=508.802µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.298296158Z level=info msg="Executing migration" id="create signing_key table" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.298778821Z level=info msg="Migration successfully executed" id="create signing_key table" duration=482.653µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.299283386Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.299765939Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=481.31µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.300240909Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.300755641Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=515.114µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.301214812Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.301380251Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=165.659µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.301907287Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.304666552Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.758624ms 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.305234195Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.305607233Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=374.552µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.306163374Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.306686873Z level=info msg="Migration successfully executed" id="Add unique index for 
dashboard_org_id_folder_uid_title" duration=523.429µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.307176199Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.307670034Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=494.064µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.308161103Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.308647082Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=486.119µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.30918509Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.309704523Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=519.402µs 2026-03-10T08:36:44.438 
INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.310187466Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.310705466Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=516.557µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.311172199Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.311612283Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=440.396µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.312211114Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.312671746Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=461.084µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.313193061Z level=info msg="Executing migration" id="add back entry for 
orgid=0 migrated status" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.313343273Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=150.642µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.31387563Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.313935401Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=59.591µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.314466866Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.317182479Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.715452ms 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.317687975Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator 
t=2026-03-10T08:36:44.320368683Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.680286ms 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.320852949Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.321047944Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=195.146µs 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=migrator t=2026-03-10T08:36:44.321553029Z level=info msg="migrations completed" performed=547 skipped=0 duration=761.893316ms 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore t=2026-03-10T08:36:44.322220229Z level=info msg="Created default organization" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=secrets t=2026-03-10T08:36:44.322906884Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=plugin.store t=2026-03-10T08:36:44.330612357Z level=info msg="Loading plugins..." 
2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=local.finder t=2026-03-10T08:36:44.366960953Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=plugin.store t=2026-03-10T08:36:44.367037336Z level=info msg="Plugins loaded" count=55 duration=36.425691ms 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=query_data t=2026-03-10T08:36:44.368399114Z level=info msg="Query Service initialization" 2026-03-10T08:36:44.438 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=live.push_http t=2026-03-10T08:36:44.374985863Z level=info msg="Live Push Gateway initialization" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.migration t=2026-03-10T08:36:44.37673084Z level=info msg=Starting 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.migration t=2026-03-10T08:36:44.377000314Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.migration orgID=1 t=2026-03-10T08:36:44.377233711Z level=info msg="Migrating alerts for organisation" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.migration orgID=1 t=2026-03-10T08:36:44.377561174Z level=info msg="Alerts found to migrate" alerts=0 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.migration t=2026-03-10T08:36:44.378320135Z level=info msg="Completed alerting migration" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.state.manager t=2026-03-10T08:36:44.3853431Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=infra.usagestats.collector t=2026-03-10T08:36:44.386342199Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=provisioning.datasources t=2026-03-10T08:36:44.387469721Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=provisioning.datasources t=2026-03-10T08:36:44.392033692Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=provisioning.alerting t=2026-03-10T08:36:44.397412311Z level=info msg="starting to provision alerting" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=provisioning.alerting t=2026-03-10T08:36:44.39746544Z level=info msg="finished to provision alerting" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=grafanaStorageLogger t=2026-03-10T08:36:44.397597337Z level=info msg="Storage starting" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=http.server t=2026-03-10T08:36:44.398807401Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=http.server t=2026-03-10T08:36:44.399134013Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.state.manager t=2026-03-10T08:36:44.399205777Z level=info msg="Warming state cache for startup" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.state.manager t=2026-03-10T08:36:44.40002972Z level=info msg="State cache has been initialized" states=0 duration=823.421µs 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=provisioning.dashboard t=2026-03-10T08:36:44.401156359Z level=info msg="starting to provision dashboards" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore.transactions t=2026-03-10T08:36:44.412758384Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.multiorg.alertmanager t=2026-03-10T08:36:44.414798213Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ngalert.scheduler t=2026-03-10T08:36:44.414822198Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ticker t=2026-03-10T08:36:44.414943996Z level=info msg=starting first_tick=2026-03-10T08:36:50Z 2026-03-10T08:36:44.439 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore.transactions t=2026-03-10T08:36:44.423391517Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 2026-03-10T08:36:44.647 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:44 vm03 ceph-mon[50703]: Deploying daemon node-exporter.a on vm03 2026-03-10T08:36:44.647 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:44 vm03 bash[85636]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 
2026-03-10T08:36:44.839 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore.transactions t=2026-03-10T08:36:44.434138995Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 2026-03-10T08:36:44.840 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=provisioning.dashboard t=2026-03-10T08:36:44.537275161Z level=info msg="finished to provision dashboards" 2026-03-10T08:36:44.840 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=plugins.update.checker t=2026-03-10T08:36:44.537413871Z level=info msg="Update check succeeded" duration=124.292166ms 2026-03-10T08:36:44.840 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=grafana-apiserver t=2026-03-10T08:36:44.649052021Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T08:36:44.840 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:36:44 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=grafana-apiserver t=2026-03-10T08:36:44.649478449Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T08:36:44.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:44 vm06 ceph-mon[54477]: Deploying daemon node-exporter.a on vm03 2026-03-10T08:36:44.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:44 vm03 ceph-mon[57160]: Deploying daemon node-exporter.a on vm03 2026-03-10T08:36:45.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:45 vm03 ceph-mon[57160]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T08:36:45.928 
INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:45 vm03 bash[85636]: Getting image source signatures 2026-03-10T08:36:45.928 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:45 vm03 bash[85636]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-10T08:36:45.928 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:45 vm03 bash[85636]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-10T08:36:45.928 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:45 vm03 bash[85636]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-10T08:36:45.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:45 vm03 ceph-mon[50703]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T08:36:46.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:45 vm06 ceph-mon[54477]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 bash[85636]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 bash[85636]: Writing manifest to image destination 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 podman[85636]: 2026-03-10 08:36:46.279226751 +0000 UTC m=+2.109699147 container create d80da177b8ae53b5fbe0c5b8055ff91ddea30542815ccb27fdcd9597578cd1a1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 podman[85636]: 2026-03-10 
08:36:46.268873376 +0000 UTC m=+2.099345782 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 podman[85636]: 2026-03-10 08:36:46.30417443 +0000 UTC m=+2.134646816 container init d80da177b8ae53b5fbe0c5b8055ff91ddea30542815ccb27fdcd9597578cd1a1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 podman[85636]: 2026-03-10 08:36:46.307489814 +0000 UTC m=+2.137962200 container start d80da177b8ae53b5fbe0c5b8055ff91ddea30542815ccb27fdcd9597578cd1a1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 bash[85636]: d80da177b8ae53b5fbe0c5b8055ff91ddea30542815ccb27fdcd9597578cd1a1 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 systemd[1]: Started Ceph node-exporter.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 
2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.315Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.315Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.316Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.316Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 
2026-03-10T08:36:46.679 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=arp 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z 
caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info 
collector=filesystem 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=netdev 
2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T08:36:46.680 
INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T08:36:46.680 
INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T08:36:46.680 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:36:46 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a[85690]: ts=2026-03-10T08:36:46.317Z caller=tls_config.go:277 level=info msg="TLS is disabled." 
http2=false address=[::]:9100 2026-03-10T08:36:46.956 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:46 vm06 systemd[1]: Starting Ceph node-exporter.b for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:36:47.340 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:47 vm06 bash[80781]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 2026-03-10T08:36:47.342 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:47.060Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003124647s 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[50703]: Deploying daemon node-exporter.b on vm06 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[57160]: Deploying daemon node-exporter.b on vm06 2026-03-10T08:36:47.345 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:47 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:47 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:47 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:47 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:36:47.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:47 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:47.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:47 vm06 ceph-mon[54477]: Deploying daemon node-exporter.b on vm06 2026-03-10T08:36:47.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:47 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:48.589 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:48 vm06 bash[80781]: Getting image source signatures 2026-03-10T08:36:48.589 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:48 vm06 bash[80781]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-10T08:36:48.589 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:48 vm06 bash[80781]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-10T08:36:48.589 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 
10 08:36:48 vm06 bash[80781]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-10T08:36:49.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:48 vm06 ceph-mon[54477]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T08:36:49.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:48 vm03 ceph-mon[50703]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T08:36:49.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:48 vm03 ceph-mon[57160]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T08:36:49.800 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:49 vm03 ceph-mon[50703]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:49.800 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:36:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:36:49.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:49 vm03 ceph-mon[57160]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:50.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-mon[54477]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:50.089 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 bash[80781]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-10T08:36:50.089 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 bash[80781]: Writing manifest to image destination 
2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 podman[80781]: 2026-03-10 08:36:49.931106619 +0000 UTC m=+2.891013210 container create b8061d8ffe75cec153f8abee67d6084c7737a3aa6449c2d45f90aa6a2bb328db (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 podman[80781]: 2026-03-10 08:36:49.924779264 +0000 UTC m=+2.884685855 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 podman[80781]: 2026-03-10 08:36:49.964837488 +0000 UTC m=+2.924744079 container init b8061d8ffe75cec153f8abee67d6084c7737a3aa6449c2d45f90aa6a2bb328db (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 podman[80781]: 2026-03-10 08:36:49.967511404 +0000 UTC m=+2.927417995 container start b8061d8ffe75cec153f8abee67d6084c7737a3aa6449c2d45f90aa6a2bb328db (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 bash[80781]: b8061d8ffe75cec153f8abee67d6084c7737a3aa6449c2d45f90aa6a2bb328db 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 systemd[1]: Started Ceph node-exporter.b for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 
2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.976Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.976Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: 
ts=2026-03-10T08:36:49.980Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=arp 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: 
ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z 
caller=node_exporter.go:117 level=info collector=filesystem 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info 
collector=netdev 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.980Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T08:36:50.090 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T08:36:50.090 
INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T08:36:50.091 
INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T08:36:50.091 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:36:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b[80835]: ts=2026-03-10T08:36:49.981Z caller=tls_config.go:277 level=info msg="TLS is disabled." 
http2=false address=[::]:9100 2026-03-10T08:36:51.133 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:51 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.133 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:51 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.133 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:51 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.133 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:51 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.133 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:51 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.133 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:51 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:51.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:51.989 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:51 vm03 systemd[1]: Stopping Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:51 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[85425]: ts=2026-03-10T08:36:51.988Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:51 vm03 podman[86190]: 2026-03-10 08:36:51.99881478 +0000 UTC m=+0.027280112 container died 7e0204b4c6ab4516eb314e3b876d1d06c70817fcf32fde086e612633e331487a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86190]: 2026-03-10 08:36:52.016152611 +0000 UTC m=+0.044617932 container remove 7e0204b4c6ab4516eb314e3b876d1d06c70817fcf32fde086e612633e331487a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86190]: 2026-03-10 08:36:52.017463004 +0000 UTC m=+0.045928336 volume remove acfc2440fdd7247c24babe696f00f3cbe820183796293db1698df5ef1f8edd78 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 bash[86190]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 systemd[1]: 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@alertmanager.a.service: Deactivated successfully. 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 systemd[1]: Stopped Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 systemd[1]: Starting Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86258]: 2026-03-10 08:36:52.203673085 +0000 UTC m=+0.017894545 volume create ef2e832c51027784a47e177a8df3bb1527b32c7730a1a716f6af31eb42392b8e 2026-03-10T08:36:52.240 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86258]: 2026-03-10 08:36:52.207187454 +0000 UTC m=+0.021408923 container create f60a6222fc42982ee65fcbbdd3de9efd0161aa5e3cfadd6f20c09eff912c67a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.240 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:52.241 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:52.241 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 
v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:52 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:52.678 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86258]: 2026-03-10 08:36:52.243275881 +0000 UTC m=+0.057497340 container init f60a6222fc42982ee65fcbbdd3de9efd0161aa5e3cfadd6f20c09eff912c67a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:52.678 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86258]: 2026-03-10 08:36:52.245715256 +0000 UTC m=+0.059936725 container start f60a6222fc42982ee65fcbbdd3de9efd0161aa5e3cfadd6f20c09eff912c67a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:52.678 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 bash[86258]: f60a6222fc42982ee65fcbbdd3de9efd0161aa5e3cfadd6f20c09eff912c67a9 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 podman[86258]: 2026-03-10 08:36:52.197026586 +0000 UTC m=+0.011248055 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 systemd[1]: Started Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 
2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.268Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.269Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.270Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.103 port=9094 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.274Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.306Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.306Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.308Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-10T08:36:52.679 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:52.308Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-10T08:36:52.915 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 systemd[1]: Stopping Ceph prometheus.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:36:53.179 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T08:36:53.179 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 
2026-03-10T08:36:53.179 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-10T08:36:53.179 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T08:36:53.179 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 
2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.913Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.916Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.916Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[79059]: ts=2026-03-10T08:36:52.916Z caller=main.go:1273 level=info msg="See you next time!" 
2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 podman[81393]: 2026-03-10 08:36:52.92601184 +0000 UTC m=+0.029859003 container died a45adbe4d96be880ddb67e2d677c9d93e1dbd627b4e221f8ac28fd9f824439e2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 podman[81393]: 2026-03-10 08:36:52.941032296 +0000 UTC m=+0.044879459 container remove a45adbe4d96be880ddb67e2d677c9d93e1dbd627b4e221f8ac28fd9f824439e2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:52 vm06 bash[81393]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@prometheus.a.service: Deactivated successfully. 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 systemd[1]: Stopped Ceph prometheus.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 systemd[1]: Starting Ceph prometheus.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 podman[81463]: 2026-03-10 08:36:53.116758786 +0000 UTC m=+0.020025708 container create 888961d3f3d3c79666ab4eab9383a437cdb6c95553173c61e8b8a5371548c50c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 podman[81463]: 2026-03-10 08:36:53.146942866 +0000 UTC m=+0.050209798 container init 888961d3f3d3c79666ab4eab9383a437cdb6c95553173c61e8b8a5371548c50c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 podman[81463]: 2026-03-10 08:36:53.149688395 +0000 UTC m=+0.052955317 container start 888961d3f3d3c79666ab4eab9383a437cdb6c95553173c61e8b8a5371548c50c (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 bash[81463]: 888961d3f3d3c79666ab4eab9383a437cdb6c95553173c61e8b8a5371548c50c 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 podman[81463]: 2026-03-10 08:36:53.108819425 +0000 UTC m=+0.012086347 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 systemd[1]: Started Ceph prometheus.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 
2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.177Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.178Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.178Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm06 (none))" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.178Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T08:36:53.180 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.178Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: Reconfiguring daemon alertmanager.a on vm03 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T08:36:53.525 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.525 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: Reconfiguring daemon alertmanager.a on vm03 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: Reconfiguring prometheus.a (dependencies changed)... 
2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 
2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:53 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE Bus STOPPING 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE Bus STOPPED 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE Bus STARTING 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE Serving on http://:::9283 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE Bus STARTED 2026-03-10T08:36:53.526 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:53 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:53] ENGINE Bus STOPPING 2026-03-10T08:36:53.590 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: Reconfiguring daemon alertmanager.a on vm03 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' 
entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:53.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:53 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.183Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.184Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.185Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.185Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.186Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.186Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=942ns 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.186Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.186Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.192Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-10T08:36:53.590 
INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.192Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=15.83µs wal_replay_duration=6.245721ms wbl_replay_duration=120ns total_replay_duration=6.272541ms 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.193Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.193Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.193Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.204Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=10.626211ms db_storage=842ns remote_storage=962ns web_handler=301ns query_engine=481ns scrape=615.503µs scrape_sd=75.802µs notify=9.788µs notify_sd=6.392µs rules=9.50916ms tracing=3.086µs 2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.204Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
2026-03-10T08:36:53.590 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:36:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:36:53.204Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-10T08:36:54.275 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T08:36:54.275 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STOPPED 2026-03-10T08:36:54.275 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STARTING 2026-03-10T08:36:54.275 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Serving on http://:::9283 2026-03-10T08:36:54.275 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STARTED 2026-03-10T08:36:54.275 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STOPPING 2026-03-10T08:36:54.294 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: Reconfiguring daemon prometheus.a on vm06 2026-03-10T08:36:54.294 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T08:36:54.294 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:54.295 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:54 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:54.642 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T08:36:54.642 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STOPPED 2026-03-10T08:36:54.642 
INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STARTING 2026-03-10T08:36:54.642 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:36:54.274Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000374571s 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: Reconfiguring daemon prometheus.a on vm06 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: Reconfiguring daemon prometheus.a on vm06 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm06.local:9095"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mon.? v1:192.168.123.106:0/355593048' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm06.local:3000"}]: dispatch 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:54.643 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:54 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:54.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Serving on http://:::9283 2026-03-10T08:36:54.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:54 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: [10/Mar/2026:08:36:54] ENGINE Bus STARTED 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[57160]: 
from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[57160]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:55 vm03 ceph-mon[50703]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:56.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:55 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:56.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:55 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:56.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:55 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:36:56.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:55 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-10T08:36:56.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:55 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:36:56.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:55 vm06 ceph-mon[54477]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:36:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:36:57 vm06 ceph-mon[54477]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:36:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:36:57 vm03 ceph-mon[50703]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:36:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:36:57 vm03 ceph-mon[57160]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:36:59.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:36:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:36:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:00 vm03 ceph-mon[57160]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:00 vm03 ceph-mon[50703]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:00 vm06 ceph-mon[54477]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:02.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:37:02 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:37:02.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:02 vm06 ceph-mon[54477]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:02.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:02 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:02 vm03 ceph-mon[50703]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:02 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:02 vm03 ceph-mon[57160]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:02 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:02.678 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:37:02 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:37:02.275Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001523919s 2026-03-10T08:37:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:03 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:03.839 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:03 vm06 ceph-mon[54477]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:03 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:03 vm03 ceph-mon[50703]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:03 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:03 vm03 ceph-mon[57160]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:06 vm06 ceph-mon[54477]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:06 vm03 ceph-mon[50703]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:06 vm03 ceph-mon[57160]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:07.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:07 vm06 ceph-mon[54477]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:07.928 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:07 vm03 ceph-mon[50703]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:07.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:07 vm03 ceph-mon[57160]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:09.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:37:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:37:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:10 vm03 ceph-mon[50703]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:10 vm03 ceph-mon[57160]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:10.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:10 vm06 ceph-mon[54477]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:12.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:37:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:37:12.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:12 vm06 ceph-mon[54477]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:12.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:12 vm03 ceph-mon[50703]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:12 vm03 
ceph-mon[57160]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:13 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:13 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:13 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:14.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:14 vm06 ceph-mon[54477]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:14.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:14 vm03 ceph-mon[50703]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:14.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:14 vm03 ceph-mon[57160]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:15.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:15 vm06 ceph-mon[54477]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:15 vm03 ceph-mon[50703]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:37:15 vm03 ceph-mon[57160]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T08:37:16.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:16 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:16 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:16 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:17 vm03 ceph-mon[50703]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:17 vm03 ceph-mon[57160]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:17 vm06 ceph-mon[54477]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:19.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:37:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:37:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:20 vm03 ceph-mon[50703]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:20.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:37:20 vm03 ceph-mon[57160]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:20.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:20 vm06 ceph-mon[54477]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:22.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:37:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:37:22.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:22 vm06 ceph-mon[54477]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:22 vm03 ceph-mon[50703]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:22 vm03 ceph-mon[57160]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:23.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:23 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:23.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:23 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:23 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:24.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:24 
vm03 ceph-mon[50703]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:24.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:24 vm03 ceph-mon[57160]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:24 vm06 ceph-mon[54477]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:26 vm06 ceph-mon[54477]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:26.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:26 vm03 ceph-mon[50703]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:26.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:26 vm03 ceph-mon[57160]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:28.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:28 vm03 ceph-mon[50703]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:28.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:28 vm03 ceph-mon[57160]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:28.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:28 vm06 ceph-mon[54477]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:29.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:37:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - 
[10/Mar/2026:08:37:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:30.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:30 vm06 ceph-mon[54477]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:30.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:30 vm03 ceph-mon[50703]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:30.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:30 vm03 ceph-mon[57160]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:31.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:31.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]: dispatch 2026-03-10T08:37:31.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.840 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]: dispatch 2026-03-10T08:37:31.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: 
from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]: dispatch 2026-03-10T08:37:31.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", 
"format": "json", "pgid": "4.13", "id": [1, 2]}]: dispatch 2026-03-10T08:37:31.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]: dispatch 2026-03-10T08:37:31.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:32.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:37:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:37:32.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:32 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]': finished 2026-03-10T08:37:32.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:32 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]': finished 2026-03-10T08:37:32.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:32 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]': finished 2026-03-10T08:37:32.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:32 vm06 ceph-mon[54477]: osdmap e57: 8 total, 8 up, 8 in 2026-03-10T08:37:32.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:32 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": 
"4.10", "id": [1, 2]}]': finished 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]': finished 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]': finished 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[50703]: osdmap e57: 8 total, 8 up, 8 in 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]': finished 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.13", "id": [1, 2]}]': finished 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1b", "id": [1, 5]}]': finished 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[57160]: osdmap e57: 8 total, 8 up, 8 in 2026-03-10T08:37:32.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:32 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:34.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:33 vm06 ceph-mon[54477]: osdmap e58: 8 total, 8 up, 8 in 2026-03-10T08:37:34.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:33 vm06 ceph-mon[54477]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:33 vm03 ceph-mon[50703]: osdmap e58: 8 total, 8 up, 8 in 2026-03-10T08:37:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:33 vm03 ceph-mon[50703]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:33 vm03 ceph-mon[57160]: osdmap e58: 8 total, 8 up, 8 in 2026-03-10T08:37:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:33 vm03 ceph-mon[57160]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:35.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:34 vm06 ceph-mon[54477]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-10T08:37:35.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:34 vm03 ceph-mon[50703]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-10T08:37:35.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:34 vm03 ceph-mon[57160]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-10T08:37:36.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:35 vm06 ceph-mon[54477]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:37:36.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:35 vm03 ceph-mon[50703]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 
160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:37:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:35 vm03 ceph-mon[57160]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T08:37:37.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:37 vm06 ceph-mon[54477]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-10T08:37:37.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:37 vm06 ceph-mon[54477]: Cluster is now healthy 2026-03-10T08:37:37.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:37 vm03 ceph-mon[50703]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-10T08:37:37.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:37 vm03 ceph-mon[50703]: Cluster is now healthy 2026-03-10T08:37:37.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:37 vm03 ceph-mon[57160]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-10T08:37:37.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:37 vm03 ceph-mon[57160]: Cluster is now healthy 2026-03-10T08:37:38.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:38 vm06 ceph-mon[54477]: pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 27 B/s, 1 objects/s recovering 2026-03-10T08:37:38.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:38 vm03 ceph-mon[50703]: pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 27 B/s, 1 objects/s recovering 2026-03-10T08:37:38.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:38 vm03 ceph-mon[57160]: pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 27 B/s, 1 objects/s recovering 2026-03-10T08:37:39.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:37:39 vm06 ceph-mon[54477]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 27 B/s, 1 objects/s recovering 2026-03-10T08:37:39.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:37:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:37:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:39.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:39 vm03 ceph-mon[57160]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 27 B/s, 1 objects/s recovering 2026-03-10T08:37:39.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:39 vm03 ceph-mon[50703]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 27 B/s, 1 objects/s recovering 2026-03-10T08:37:42.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:37:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:37:42.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:42 vm06 ceph-mon[54477]: pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 22 B/s, 0 objects/s recovering 2026-03-10T08:37:42.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:42 vm03 ceph-mon[50703]: pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 22 B/s, 0 objects/s recovering 2026-03-10T08:37:42.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:42 vm03 ceph-mon[57160]: pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s; 22 B/s, 0 objects/s recovering 2026-03-10T08:37:43.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:43 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:43.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:43 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:43.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:43 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:44 vm03 ceph-mon[50703]: pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 952 B/s rd, 0 op/s; 20 B/s, 0 objects/s recovering 2026-03-10T08:37:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:44 vm03 ceph-mon[57160]: pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 952 B/s rd, 0 op/s; 20 B/s, 0 objects/s recovering 2026-03-10T08:37:44.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:44 vm06 ceph-mon[54477]: pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 952 B/s rd, 0 op/s; 20 B/s, 0 objects/s recovering 2026-03-10T08:37:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:46 vm03 ceph-mon[50703]: pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering 2026-03-10T08:37:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:46 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:46 vm03 ceph-mon[57160]: pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 18 
B/s, 0 objects/s recovering 2026-03-10T08:37:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:46 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:46.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:46 vm06 ceph-mon[54477]: pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering 2026-03-10T08:37:46.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:46 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:37:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:48 vm03 ceph-mon[50703]: pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 18 B/s, 0 objects/s recovering 2026-03-10T08:37:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:48 vm03 ceph-mon[57160]: pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 18 B/s, 0 objects/s recovering 2026-03-10T08:37:48.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:48 vm06 ceph-mon[54477]: pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 18 B/s, 0 objects/s recovering 2026-03-10T08:37:49.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:37:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:37:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:50.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:50 vm06 ceph-mon[54477]: pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:50.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:37:50 vm03 ceph-mon[50703]: pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:50.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:50 vm03 ceph-mon[57160]: pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:52.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:37:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:37:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:52 vm06 ceph-mon[54477]: pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:52.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:52 vm03 ceph-mon[57160]: pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:52.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:52 vm03 ceph-mon[50703]: pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T08:37:53.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:53 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:53.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:53 vm06 ceph-mon[54477]: pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:53.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:53 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:53.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:53 vm03 ceph-mon[50703]: pgmap 
v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:53.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:53 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:37:53.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:53 vm03 ceph-mon[57160]: pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:state without impacting any branches by switching back to a branch. 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: git switch -c 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:Or undo this operation with: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: git switch - 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.008 INFO:tasks.workunit.client.0.vm03.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T08:37:54.009 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-10T08:37:54.009 INFO:tasks.workunit.client.0.vm03.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T08:37:54.014 DEBUG:teuthology.orchestra.run.vm03:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-10T08:37:54.068 INFO:tasks.workunit.client.0.vm03.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T08:37:54.070 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T08:37:54.070 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T08:37:54.111 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-10T08:37:54.146 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T08:37:54.175 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T08:37:54.176 
INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T08:37:54.176 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T08:37:54.206 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T08:37:54.208 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T08:37:54.208 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-10T08:37:54.264 INFO:tasks.workunit:Running workunits matching rados/test_python.sh on client.0... 2026-03-10T08:37:54.265 INFO:tasks.workunit:Running workunit rados/test_python.sh... 2026-03-10T08:37:54.265 DEBUG:teuthology.orchestra.run.vm03:workunit test rados/test_python.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh 2026-03-10T08:37:54.322 INFO:tasks.workunit.client.0.vm03.stderr:+ ceph osd pool create rbd 2026-03-10T08:37:54.582 INFO:tasks.workunit.client.0.vm03.stderr:pool 'rbd' already exists 2026-03-10T08:37:54.592 INFO:tasks.workunit.client.0.vm03.stderr:++ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh 2026-03-10T08:37:54.592 INFO:tasks.workunit.client.0.vm03.stderr:+ python3 -m pytest -v /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/../../../src/test/pybind/test_rados.py 2026-03-10T08:37:54.675 
INFO:tasks.workunit.client.0.vm03.stdout:============================= test session starts ============================== 2026-03-10T08:37:54.675 INFO:tasks.workunit.client.0.vm03.stdout:platform linux -- Python 3.9.25, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3 2026-03-10T08:37:54.676 INFO:tasks.workunit.client.0.vm03.stdout:cachedir: .pytest_cache 2026-03-10T08:37:54.676 INFO:tasks.workunit.client.0.vm03.stdout:rootdir: /home/ubuntu/cephtest/clone.client.0/src/test/pybind, configfile: pytest.ini 2026-03-10T08:37:54.828 INFO:tasks.workunit.client.0.vm03.stdout:collecting ... collected 91 items 2026-03-10T08:37:54.828 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:37:54.833 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init_error PASSED [ 1%] 2026-03-10T08:37:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:54 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/636375556' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:54 vm06 ceph-mon[54477]: from='client.24617 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:54.878 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init PASSED [ 2%] 2026-03-10T08:37:54.899 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_ioctx_context_manager PASSED [ 3%] 2026-03-10T08:37:54.904 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv PASSED [ 4%] 2026-03-10T08:37:54.908 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv_empty_str PASSED [ 5%] 2026-03-10T08:37:54.915 
INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_configuring PASSED [ 6%] 2026-03-10T08:37:54.922 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_connected PASSED [ 7%] 2026-03-10T08:37:54.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:54 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/636375556' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:54.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:54 vm03 ceph-mon[50703]: from='client.24617 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:54.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:54 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/636375556' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:54.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:54 vm03 ceph-mon[57160]: from='client.24617 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:54.941 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_shutdown PASSED [ 8%] 2026-03-10T08:37:54.964 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_ping_monitor PASSED [ 9%] 2026-03-10T08:37:54.991 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_annotations PASSED [ 10%] 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: from='client.24617 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: osdmap e59: 
8 total, 8 up, 8 in 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/636375556' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: from='client.24617 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3325031766' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:37:55.641 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:55 vm06 ceph-mon[54477]: pgmap v48: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: from='client.24617 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: osdmap e59: 8 total, 8 up, 8 in 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/636375556' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: from='client.24617 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3325031766' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[57160]: pgmap v48: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: from='client.24617 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: osdmap e59: 8 total, 8 up, 8 in 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/636375556' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: from='client.24617 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3325031766' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T08:37:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:55 vm03 ceph-mon[50703]: pgmap v48: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:37:56.527 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create PASSED [ 12%] 2026-03-10T08:37:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: osdmap e60: 8 total, 8 up, 8 in 2026-03-10T08:37:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:37:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 ' 
entity='mgr.y' 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:56 vm06 ceph-mon[54477]: osdmap e61: 8 total, 8 up, 8 in 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: osdmap e60: 8 total, 8 up, 8 in 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[57160]: osdmap e61: 8 total, 8 up, 8 in 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: osdmap e60: 8 total, 8 up, 8 in 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:37:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:56 vm03 ceph-mon[50703]: osdmap e61: 8 total, 8 up, 8 in 2026-03-10T08:37:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:57 vm06 ceph-mon[54477]: 
pgmap v51: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T08:37:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:57 vm06 ceph-mon[54477]: osdmap e62: 8 total, 8 up, 8 in 2026-03-10T08:37:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:57 vm03 ceph-mon[57160]: pgmap v51: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T08:37:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:57 vm03 ceph-mon[57160]: osdmap e62: 8 total, 8 up, 8 in 2026-03-10T08:37:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:57 vm03 ceph-mon[50703]: pgmap v51: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T08:37:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:57 vm03 ceph-mon[50703]: osdmap e62: 8 total, 8 up, 8 in 2026-03-10T08:37:58.541 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create_utf8 PASSED [ 13%] 2026-03-10T08:37:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:59 vm06 ceph-mon[54477]: osdmap e63: 8 total, 8 up, 8 in 2026-03-10T08:37:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:37:59 vm06 ceph-mon[54477]: pgmap v54: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:59.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:59 vm03 ceph-mon[57160]: osdmap e63: 8 total, 8 up, 8 in 2026-03-10T08:37:59.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:37:59 vm03 ceph-mon[57160]: pgmap v54: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:37:59.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:37:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:37:59] "GET /metrics 
HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:37:59.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:59 vm03 ceph-mon[50703]: osdmap e63: 8 total, 8 up, 8 in 2026-03-10T08:37:59.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:37:59 vm03 ceph-mon[50703]: pgmap v54: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:00.579 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_pool_lookup_utf8 PASSED [ 14%] 2026-03-10T08:38:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:00 vm06 ceph-mon[54477]: osdmap e64: 8 total, 8 up, 8 in 2026-03-10T08:38:00.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:00 vm03 ceph-mon[57160]: osdmap e64: 8 total, 8 up, 8 in 2026-03-10T08:38:00.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:00 vm03 ceph-mon[50703]: osdmap e64: 8 total, 8 up, 8 in 2026-03-10T08:38:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:01 vm06 ceph-mon[54477]: osdmap e65: 8 total, 8 up, 8 in 2026-03-10T08:38:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:01 vm06 ceph-mon[54477]: pgmap v57: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:01 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:38:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:01 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[57160]: osdmap e65: 8 total, 8 up, 8 in 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[57160]: pgmap v57: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:01.928 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[50703]: osdmap e65: 8 total, 8 up, 8 in 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[50703]: pgmap v57: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:38:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:01 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:02.589 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_eexist PASSED [ 15%] 2026-03-10T08:38:02.606 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:38:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:38:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:02 vm03 ceph-mon[57160]: osdmap e66: 8 total, 8 up, 8 in 2026-03-10T08:38:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:02 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:02 vm03 ceph-mon[50703]: osdmap e66: 8 total, 8 up, 8 in 2026-03-10T08:38:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:02 vm03 ceph-mon[50703]: 
from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:03.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:02 vm06 ceph-mon[54477]: osdmap e66: 8 total, 8 up, 8 in 2026-03-10T08:38:03.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:02 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:03 vm03 ceph-mon[57160]: osdmap e67: 8 total, 8 up, 8 in 2026-03-10T08:38:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:03 vm03 ceph-mon[57160]: pgmap v60: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:03 vm03 ceph-mon[50703]: osdmap e67: 8 total, 8 up, 8 in 2026-03-10T08:38:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:03 vm03 ceph-mon[50703]: pgmap v60: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:04.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:03 vm06 ceph-mon[54477]: osdmap e67: 8 total, 8 up, 8 in 2026-03-10T08:38:04.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:03 vm06 ceph-mon[54477]: pgmap v60: 164 pgs: 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:04.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:04 vm03 ceph-mon[57160]: osdmap e68: 8 total, 8 up, 8 in 2026-03-10T08:38:04.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:04 vm03 ceph-mon[50703]: osdmap e68: 8 total, 8 up, 8 in 2026-03-10T08:38:05.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:04 vm06 ceph-mon[54477]: osdmap e68: 8 total, 8 up, 8 in 2026-03-10T08:38:06.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:05 vm06 ceph-mon[54477]: osdmap e69: 8 total, 8 up, 8 in 2026-03-10T08:38:06.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:05 vm06 ceph-mon[54477]: pgmap v63: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:06.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:05 vm03 ceph-mon[57160]: osdmap e69: 8 total, 8 up, 8 in 2026-03-10T08:38:06.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:05 vm03 ceph-mon[57160]: pgmap v63: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:06.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:05 vm03 ceph-mon[50703]: osdmap e69: 8 total, 8 up, 8 in 2026-03-10T08:38:06.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:05 vm03 ceph-mon[50703]: pgmap v63: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:07.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:06 vm06 ceph-mon[54477]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:07.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:06 vm06 ceph-mon[54477]: osdmap e70: 8 total, 8 up, 8 in 2026-03-10T08:38:07.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:06 vm06 ceph-mon[54477]: osdmap e71: 8 total, 8 up, 8 in 2026-03-10T08:38:07.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:06 vm03 ceph-mon[57160]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:07.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:06 vm03 ceph-mon[57160]: osdmap e70: 8 total, 8 up, 8 in 2026-03-10T08:38:07.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:06 vm03 ceph-mon[57160]: osdmap e71: 8 total, 8 up, 8 in 2026-03-10T08:38:07.178 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:06 vm03 ceph-mon[50703]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:07.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:06 vm03 ceph-mon[50703]: osdmap e70: 8 total, 8 up, 8 in 2026-03-10T08:38:07.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:06 vm03 ceph-mon[50703]: osdmap e71: 8 total, 8 up, 8 in 2026-03-10T08:38:08.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:07 vm06 ceph-mon[54477]: pgmap v66: 228 pgs: 228 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:08.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:07 vm06 ceph-mon[54477]: osdmap e72: 8 total, 8 up, 8 in 2026-03-10T08:38:08.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:07 vm03 ceph-mon[57160]: pgmap v66: 228 pgs: 228 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:08.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:07 vm03 ceph-mon[57160]: osdmap e72: 8 total, 8 up, 8 in 2026-03-10T08:38:08.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:07 vm03 ceph-mon[50703]: pgmap v66: 228 pgs: 228 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:08.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:07 vm03 ceph-mon[50703]: osdmap e72: 8 total, 8 up, 8 in 2026-03-10T08:38:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:09 vm03 ceph-mon[57160]: osdmap e73: 8 total, 8 up, 8 in 2026-03-10T08:38:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:09 vm03 ceph-mon[57160]: pgmap v69: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:09.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:38:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - 
[10/Mar/2026:08:38:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:38:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:09 vm03 ceph-mon[50703]: osdmap e73: 8 total, 8 up, 8 in 2026-03-10T08:38:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:09 vm03 ceph-mon[50703]: pgmap v69: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:09 vm06 ceph-mon[54477]: osdmap e73: 8 total, 8 up, 8 in 2026-03-10T08:38:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:09 vm06 ceph-mon[54477]: pgmap v69: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:10.760 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_list_pools PASSED [ 16%] 2026-03-10T08:38:11.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:10 vm06 ceph-mon[54477]: osdmap e74: 8 total, 8 up, 8 in 2026-03-10T08:38:11.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:10 vm03 ceph-mon[57160]: osdmap e74: 8 total, 8 up, 8 in 2026-03-10T08:38:11.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:10 vm03 ceph-mon[50703]: osdmap e74: 8 total, 8 up, 8 in 2026-03-10T08:38:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:11 vm06 ceph-mon[54477]: osdmap e75: 8 total, 8 up, 8 in 2026-03-10T08:38:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:11 vm06 ceph-mon[54477]: pgmap v72: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:11 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:12.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:11 vm03 ceph-mon[57160]: osdmap e75: 8 total, 8 up, 8 in 
2026-03-10T08:38:12.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:11 vm03 ceph-mon[57160]: pgmap v72: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:12.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:11 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:12.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:11 vm03 ceph-mon[50703]: osdmap e75: 8 total, 8 up, 8 in 2026-03-10T08:38:12.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:11 vm03 ceph-mon[50703]: pgmap v72: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:12.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:11 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:12.784 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:38:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:38:13.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:12 vm06 ceph-mon[54477]: osdmap e76: 8 total, 8 up, 8 in 2026-03-10T08:38:13.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:12 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:13.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:12 vm03 ceph-mon[57160]: osdmap e76: 8 total, 8 up, 8 in 2026-03-10T08:38:13.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:12 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:13.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:12 vm03 ceph-mon[50703]: osdmap e76: 8 total, 8 up, 8 in 
2026-03-10T08:38:13.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:12 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:14.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:13 vm06 ceph-mon[54477]: osdmap e77: 8 total, 8 up, 8 in 2026-03-10T08:38:14.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:13 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T08:38:14.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:13 vm06 ceph-mon[54477]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T08:38:14.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:13 vm06 ceph-mon[54477]: pgmap v75: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[57160]: osdmap e77: 8 total, 8 up, 8 in 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[57160]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[57160]: pgmap v75: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[50703]: osdmap e77: 8 total, 8 up, 8 in 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[50703]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T08:38:14.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:13 vm03 ceph-mon[50703]: pgmap v75: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:15.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:15 vm06 ceph-mon[54477]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T08:38:15.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:15 vm06 ceph-mon[54477]: osdmap e78: 8 total, 8 up, 8 in 2026-03-10T08:38:15.339 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:15 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:38:15.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:15 vm06 ceph-mon[54477]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[57160]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[57160]: osdmap e78: 8 total, 8 up, 8 in 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[57160]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[50703]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[50703]: osdmap e78: 8 total, 8 up, 8 in 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:38:15.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:15 vm03 ceph-mon[50703]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:38:16.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:16 vm06 ceph-mon[54477]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T08:38:16.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:16 vm06 ceph-mon[54477]: osdmap e79: 8 total, 8 up, 8 in 2026-03-10T08:38:16.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:16 vm06 
ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T08:38:16.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:16 vm06 ceph-mon[54477]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T08:38:16.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:16 vm06 ceph-mon[54477]: pgmap v78: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[57160]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[57160]: osdmap e79: 8 total, 8 up, 8 in 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[57160]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[57160]: pgmap v78: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[50703]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[50703]: osdmap e79: 8 total, 8 up, 8 in 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1086143262' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[50703]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T08:38:16.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:16 vm03 ceph-mon[50703]: pgmap v78: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:17.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:17 vm06 ceph-mon[54477]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T08:38:17.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:17 vm06 ceph-mon[54477]: osdmap e80: 8 total, 8 up, 8 in 2026-03-10T08:38:17.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:17 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:17.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:17 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[57160]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[57160]: osdmap e80: 8 total, 8 up, 8 in 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[50703]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[50703]: osdmap e80: 8 total, 8 up, 8 in 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:17.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:17 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:18.077 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_pool_base_tier PASSED [ 17%] 2026-03-10T08:38:18.087 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_fsid PASSED [ 18%] 2026-03-10T08:38:18.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:18 vm06 ceph-mon[54477]: osdmap e81: 8 total, 8 up, 8 in 2026-03-10T08:38:18.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:18 vm06 ceph-mon[54477]: pgmap v81: 196 pgs: 196 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:18.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:18 vm03 ceph-mon[57160]: osdmap e81: 8 total, 8 up, 8 in 2026-03-10T08:38:18.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:18 vm03 ceph-mon[57160]: pgmap v81: 196 pgs: 196 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T08:38:18.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:18 vm03 ceph-mon[50703]: osdmap e81: 8 total, 8 up, 8 in 2026-03-10T08:38:18.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:18 vm03 ceph-mon[50703]: pgmap v81: 196 pgs: 196 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:19.251 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_blocklist_add PASSED [ 19%] 2026-03-10T08:38:19.265 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_cluster_stats PASSED [ 20%] 2026-03-10T08:38:19.281 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_monitor_log PASSED [ 21%] 2026-03-10T08:38:19.527 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:38:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:38:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:38:19.528 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:19 vm03 ceph-mon[57160]: osdmap e82: 8 total, 8 up, 8 in 2026-03-10T08:38:19.528 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2275827009' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T08:38:19.528 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:19 vm03 ceph-mon[57160]: from='client.24710 ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T08:38:19.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:19 vm03 ceph-mon[50703]: osdmap e82: 8 total, 8 up, 8 in 2026-03-10T08:38:19.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:19 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2275827009' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T08:38:19.528 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:19 vm03 ceph-mon[50703]: from='client.24710 ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T08:38:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:19 vm06 ceph-mon[54477]: osdmap e82: 8 total, 8 up, 8 in 2026-03-10T08:38:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2275827009' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T08:38:19.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:19 vm06 ceph-mon[54477]: from='client.24710 ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T08:38:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:20 vm06 ceph-mon[54477]: from='client.24710 ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T08:38:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:20 vm06 ceph-mon[54477]: osdmap e83: 8 total, 8 up, 8 in 2026-03-10T08:38:20.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:20 vm06 ceph-mon[54477]: pgmap v84: 164 pgs: 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:20.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:20 vm03 ceph-mon[57160]: from='client.24710 ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T08:38:20.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:20 vm03 
ceph-mon[57160]: osdmap e83: 8 total, 8 up, 8 in 2026-03-10T08:38:20.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:20 vm03 ceph-mon[57160]: pgmap v84: 164 pgs: 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:20 vm03 ceph-mon[50703]: from='client.24710 ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T08:38:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:20 vm03 ceph-mon[50703]: osdmap e83: 8 total, 8 up, 8 in 2026-03-10T08:38:20.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:20 vm03 ceph-mon[50703]: pgmap v84: 164 pgs: 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:21.432 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_last_version PASSED [ 23%] 2026-03-10T08:38:21.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:21 vm06 ceph-mon[54477]: osdmap e84: 8 total, 8 up, 8 in 2026-03-10T08:38:21.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:21 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/180913324' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:21.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:21 vm06 ceph-mon[54477]: from='client.24719 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:21.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:21 vm03 ceph-mon[57160]: osdmap e84: 8 total, 8 up, 8 in 2026-03-10T08:38:21.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:21 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/180913324' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:21.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:21 vm03 ceph-mon[57160]: from='client.24719 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:21.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:21 vm03 ceph-mon[50703]: osdmap e84: 8 total, 8 up, 8 in 2026-03-10T08:38:21.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:21 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/180913324' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:21.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:21 vm03 ceph-mon[50703]: from='client.24719 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:22.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:38:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:38:22.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:22 vm06 ceph-mon[54477]: from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:22.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:22 vm06 ceph-mon[54477]: osdmap e85: 8 total, 8 up, 8 in 2026-03-10T08:38:22.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:22 vm06 ceph-mon[54477]: pgmap v87: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:22.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:22 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:22.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:22 vm06 ceph-mon[54477]: osdmap e86: 8 total, 8 up, 8 in 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:38:22 vm03 ceph-mon[57160]: from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[57160]: osdmap e85: 8 total, 8 up, 8 in 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[57160]: pgmap v87: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[57160]: osdmap e86: 8 total, 8 up, 8 in 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[50703]: from='client.24719 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[50703]: osdmap e85: 8 total, 8 up, 8 in 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[50703]: pgmap v87: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:22.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:22 vm03 ceph-mon[50703]: osdmap e86: 8 total, 8 up, 8 in 2026-03-10T08:38:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:23 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:23 vm06 ceph-mon[54477]: 
osdmap e87: 8 total, 8 up, 8 in 2026-03-10T08:38:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:23 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2147464312' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:23 vm06 ceph-mon[54477]: from='client.24736 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[57160]: osdmap e87: 8 total, 8 up, 8 in 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2147464312' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[57160]: from='client.24736 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[50703]: osdmap e87: 8 total, 8 up, 8 in 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2147464312' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:23 vm03 ceph-mon[50703]: from='client.24736 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:24.445 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_stats PASSED [ 24%] 2026-03-10T08:38:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:24 vm06 ceph-mon[54477]: pgmap v90: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 235 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:24 vm06 ceph-mon[54477]: from='client.24736 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:24.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:24 vm06 ceph-mon[54477]: osdmap e88: 8 total, 8 up, 8 in 2026-03-10T08:38:24.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:24 vm03 ceph-mon[57160]: pgmap v90: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 235 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:24.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:24 vm03 ceph-mon[57160]: from='client.24736 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:24.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:24 vm03 ceph-mon[57160]: osdmap e88: 8 total, 8 up, 8 in 2026-03-10T08:38:24.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:24 vm03 ceph-mon[50703]: pgmap v90: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 235 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:24.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:24 vm03 ceph-mon[50703]: from='client.24736 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": 
"noup"}]': finished 2026-03-10T08:38:24.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:24 vm03 ceph-mon[50703]: osdmap e88: 8 total, 8 up, 8 in 2026-03-10T08:38:25.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:25 vm06 ceph-mon[54477]: osdmap e89: 8 total, 8 up, 8 in 2026-03-10T08:38:25.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:25 vm06 ceph-mon[54477]: pgmap v93: 164 pgs: 164 active+clean; 455 KiB data, 235 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:25.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:25 vm03 ceph-mon[50703]: osdmap e89: 8 total, 8 up, 8 in 2026-03-10T08:38:25.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:25 vm03 ceph-mon[50703]: pgmap v93: 164 pgs: 164 active+clean; 455 KiB data, 235 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:25.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:25 vm03 ceph-mon[57160]: osdmap e89: 8 total, 8 up, 8 in 2026-03-10T08:38:25.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:25 vm03 ceph-mon[57160]: pgmap v93: 164 pgs: 164 active+clean; 455 KiB data, 235 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:26 vm06 ceph-mon[54477]: osdmap e90: 8 total, 8 up, 8 in 2026-03-10T08:38:26.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:26 vm03 ceph-mon[57160]: osdmap e90: 8 total, 8 up, 8 in 2026-03-10T08:38:26.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:26 vm03 ceph-mon[50703]: osdmap e90: 8 total, 8 up, 8 in 2026-03-10T08:38:27.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:27 vm06 ceph-mon[54477]: osdmap e91: 8 total, 8 up, 8 in 2026-03-10T08:38:27.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:27 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1914305682' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:27.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:27 vm06 ceph-mon[54477]: pgmap v96: 196 pgs: 196 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:38:27.839 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:38:27 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=infra.usagestats t=2026-03-10T08:38:27.400737992Z level=info msg="Usage stats are ready to report" 2026-03-10T08:38:27.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:27 vm03 ceph-mon[57160]: osdmap e91: 8 total, 8 up, 8 in 2026-03-10T08:38:27.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:27 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1914305682' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:27.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:27 vm03 ceph-mon[57160]: pgmap v96: 196 pgs: 196 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:38:27.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:27 vm03 ceph-mon[50703]: osdmap e91: 8 total, 8 up, 8 in 2026-03-10T08:38:27.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:27 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1914305682' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:27.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:27 vm03 ceph-mon[50703]: pgmap v96: 196 pgs: 196 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:38:28.496 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write PASSED [ 25%] 2026-03-10T08:38:28.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:28 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:28.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:28 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1914305682' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:28.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:28 vm06 ceph-mon[54477]: osdmap e92: 8 total, 8 up, 8 in 2026-03-10T08:38:28.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:28 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:28.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:28 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1914305682' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:28.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:28 vm03 ceph-mon[57160]: osdmap e92: 8 total, 8 up, 8 in 2026-03-10T08:38:28.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:28 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:28.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:28 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1914305682' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:28.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:28 vm03 ceph-mon[50703]: osdmap e92: 8 total, 8 up, 8 in 2026-03-10T08:38:29.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:38:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:38:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:38:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:30 vm06 ceph-mon[54477]: osdmap e93: 8 total, 8 up, 8 in 2026-03-10T08:38:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:30 vm06 ceph-mon[54477]: pgmap v99: 164 pgs: 164 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:30 vm03 ceph-mon[57160]: osdmap e93: 8 total, 8 up, 8 in 2026-03-10T08:38:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:30 vm03 ceph-mon[57160]: pgmap v99: 164 pgs: 164 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:30 vm03 ceph-mon[50703]: osdmap e93: 8 total, 8 up, 8 in 2026-03-10T08:38:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:30 vm03 ceph-mon[50703]: pgmap v99: 164 pgs: 164 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:31.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:31 vm06 ceph-mon[54477]: osdmap e94: 8 total, 8 up, 8 in 2026-03-10T08:38:31.427 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:31 vm03 ceph-mon[57160]: osdmap e94: 8 total, 8 up, 8 in 2026-03-10T08:38:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:31 vm03 ceph-mon[50703]: osdmap e94: 8 total, 8 up, 8 in 2026-03-10T08:38:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:38:32 vm06 ceph-mon[54477]: osdmap e95: 8 total, 8 up, 8 in 2026-03-10T08:38:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:32 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3110697637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:32 vm06 ceph-mon[54477]: from='client.24748 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:32 vm06 ceph-mon[54477]: pgmap v102: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:32 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[57160]: osdmap e95: 8 total, 8 up, 8 in 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3110697637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[57160]: from='client.24748 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[57160]: pgmap v102: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[50703]: osdmap e95: 8 total, 8 up, 8 in 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3110697637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[50703]: from='client.24748 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[50703]: pgmap v102: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 257 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:38:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:32 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:38:32.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:38:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:38:33.077 
INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_full PASSED [ 26%] 2026-03-10T08:38:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:33 vm06 ceph-mon[54477]: from='client.24748 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:33 vm06 ceph-mon[54477]: osdmap e96: 8 total, 8 up, 8 in 2026-03-10T08:38:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:33 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:33.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:33 vm03 ceph-mon[57160]: from='client.24748 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:33.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:33 vm03 ceph-mon[57160]: osdmap e96: 8 total, 8 up, 8 in 2026-03-10T08:38:33.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:33 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:33 vm03 ceph-mon[50703]: from='client.24748 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:33 vm03 ceph-mon[50703]: osdmap e96: 8 total, 8 up, 8 in 2026-03-10T08:38:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:33 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:38:34.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:34 vm06 ceph-mon[54477]: osdmap e97: 8 total, 8 
up, 8 in 2026-03-10T08:38:34.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:34 vm06 ceph-mon[54477]: pgmap v105: 164 pgs: 164 active+clean; 455 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:34 vm03 ceph-mon[57160]: osdmap e97: 8 total, 8 up, 8 in 2026-03-10T08:38:34.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:34 vm03 ceph-mon[57160]: pgmap v105: 164 pgs: 164 active+clean; 455 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:34 vm03 ceph-mon[50703]: osdmap e97: 8 total, 8 up, 8 in 2026-03-10T08:38:34.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:34 vm03 ceph-mon[50703]: pgmap v105: 164 pgs: 164 active+clean; 455 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:35.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:35 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:35.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:35 vm06 ceph-mon[54477]: osdmap e98: 8 total, 8 up, 8 in 2026-03-10T08:38:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:35 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:35.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:35 vm03 ceph-mon[57160]: osdmap e98: 8 total, 8 up, 8 in 2026-03-10T08:38:35.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:35 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:38:35.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:35 vm03 ceph-mon[50703]: osdmap e98: 8 total, 8 up, 8 in 2026-03-10T08:38:36.629 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:36 vm06 ceph-mon[54477]: osdmap e99: 
8 total, 8 up, 8 in 2026-03-10T08:38:36.629 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:36 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3215468014' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:36.629 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:36 vm06 ceph-mon[54477]: from='client.24754 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[57160]: osdmap e99: 8 total, 8 up, 8 in 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3215468014' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[57160]: from='client.24754 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[57160]: pgmap v108: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[50703]: osdmap e99: 8 total, 8 up, 8 in 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3215468014' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[50703]: from='client.24754 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:38:36.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:36 vm03 ceph-mon[50703]: pgmap v108: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:37.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:36 vm06 ceph-mon[54477]: pgmap v108: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:37.630 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame PASSED [ 27%] 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[57160]: from='client.24754 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[57160]: osdmap e100: 8 total, 8 up, 8 in 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[57160]: pgmap v110: 196 pgs: 196 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 241 B/s wr, 1 op/s 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[57160]: osdmap e101: 8 total, 8 up, 8 in 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[50703]: from='client.24754 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[50703]: osdmap e100: 8 total, 8 up, 8 in 2026-03-10T08:38:37.928 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[50703]: pgmap v110: 196 pgs: 196 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 241 B/s wr, 1 op/s 2026-03-10T08:38:37.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:37 vm03 ceph-mon[50703]: osdmap e101: 8 total, 8 up, 8 in 2026-03-10T08:38:38.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:37 vm06 ceph-mon[54477]: from='client.24754 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:38:38.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:37 vm06 ceph-mon[54477]: osdmap e100: 8 total, 8 up, 8 in 2026-03-10T08:38:38.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:37 vm06 ceph-mon[54477]: pgmap v110: 196 pgs: 196 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 241 B/s wr, 1 op/s 2026-03-10T08:38:38.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:37 vm06 ceph-mon[54477]: osdmap e101: 8 total, 8 up, 8 in 2026-03-10T08:38:39.777 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:39 vm03 ceph-mon[57160]: osdmap e102: 8 total, 8 up, 8 in 2026-03-10T08:38:39.777 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:39 vm03 ceph-mon[57160]: pgmap v113: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:38:39.777 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:38:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:38:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:38:40.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:39 vm06 ceph-mon[54477]: osdmap e102: 8 total, 8 up, 8 in 2026-03-10T08:38:40.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:39 vm06 ceph-mon[54477]: pgmap v113: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T08:38:40.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:39 vm03 ceph-mon[50703]: osdmap e102: 8 total, 8 up, 8 in
2026-03-10T08:38:40.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:39 vm03 ceph-mon[50703]: pgmap v113: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:41.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:40 vm06 ceph-mon[54477]: osdmap e103: 8 total, 8 up, 8 in
2026-03-10T08:38:41.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:40 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3374818344' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:41.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:40 vm03 ceph-mon[57160]: osdmap e103: 8 total, 8 up, 8 in
2026-03-10T08:38:41.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:40 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3374818344' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:41.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:40 vm03 ceph-mon[50703]: osdmap e103: 8 total, 8 up, 8 in
2026-03-10T08:38:41.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:40 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3374818344' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:41.796 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_append PASSED [ 28%]
2026-03-10T08:38:42.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:41 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3374818344' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:42.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:41 vm06 ceph-mon[54477]: osdmap e104: 8 total, 8 up, 8 in
2026-03-10T08:38:42.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:41 vm06 ceph-mon[54477]: pgmap v116: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:38:42.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:41 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3374818344' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[57160]: osdmap e104: 8 total, 8 up, 8 in
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[57160]: pgmap v116: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3374818344' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[50703]: osdmap e104: 8 total, 8 up, 8 in
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[50703]: pgmap v116: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 332 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:38:42.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:41 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:38:42.822 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:38:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:38:43.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:42 vm06 ceph-mon[54477]: osdmap e105: 8 total, 8 up, 8 in
2026-03-10T08:38:43.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:42 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:38:43.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:42 vm03 ceph-mon[57160]: osdmap e105: 8 total, 8 up, 8 in
2026-03-10T08:38:43.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:42 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:38:43.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:42 vm03 ceph-mon[50703]: osdmap e105: 8 total, 8 up, 8 in
2026-03-10T08:38:43.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:42 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:38:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:43 vm03 ceph-mon[57160]: osdmap e106: 8 total, 8 up, 8 in
2026-03-10T08:38:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:43 vm03 ceph-mon[57160]: pgmap v119: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 337 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:43 vm03 ceph-mon[57160]: osdmap e107: 8 total, 8 up, 8 in
2026-03-10T08:38:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:43 vm03 ceph-mon[50703]: osdmap e106: 8 total, 8 up, 8 in
2026-03-10T08:38:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:43 vm03 ceph-mon[50703]: pgmap v119: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 337 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:43 vm03 ceph-mon[50703]: osdmap e107: 8 total, 8 up, 8 in
2026-03-10T08:38:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:43 vm06 ceph-mon[54477]: osdmap e106: 8 total, 8 up, 8 in
2026-03-10T08:38:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:43 vm06 ceph-mon[54477]: pgmap v119: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 337 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:43 vm06 ceph-mon[54477]: osdmap e107: 8 total, 8 up, 8 in
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2777980978' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[57160]: from='client.24760 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[57160]: from='client.24760 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[57160]: osdmap e108: 8 total, 8 up, 8 in
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2777980978' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[50703]: from='client.24760 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[50703]: from='client.24760 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:45.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:44 vm03 ceph-mon[50703]: osdmap e108: 8 total, 8 up, 8 in
2026-03-10T08:38:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:44 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2777980978' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:44 vm06 ceph-mon[54477]: from='client.24760 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:44 vm06 ceph-mon[54477]: from='client.24760 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:44 vm06 ceph-mon[54477]: osdmap e108: 8 total, 8 up, 8 in
2026-03-10T08:38:45.830 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_zeros PASSED [ 29%]
2026-03-10T08:38:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:46 vm06 ceph-mon[54477]: pgmap v122: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 337 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:46 vm06 ceph-mon[54477]: osdmap e109: 8 total, 8 up, 8 in
2026-03-10T08:38:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:46 vm03 ceph-mon[57160]: pgmap v122: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 337 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:46 vm03 ceph-mon[57160]: osdmap e109: 8 total, 8 up, 8 in
2026-03-10T08:38:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:46 vm03 ceph-mon[50703]: pgmap v122: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 337 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:46 vm03 ceph-mon[50703]: osdmap e109: 8 total, 8 up, 8 in
2026-03-10T08:38:47.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:47 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:38:47.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:47 vm06 ceph-mon[54477]: osdmap e110: 8 total, 8 up, 8 in
2026-03-10T08:38:47.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:47 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:38:47.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:47 vm03 ceph-mon[57160]: osdmap e110: 8 total, 8 up, 8 in
2026-03-10T08:38:47.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:47 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:38:47.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:47 vm03 ceph-mon[50703]: osdmap e110: 8 total, 8 up, 8 in
2026-03-10T08:38:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:48 vm06 ceph-mon[54477]: pgmap v125: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:48 vm06 ceph-mon[54477]: osdmap e111: 8 total, 8 up, 8 in
2026-03-10T08:38:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:48 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/4283626639' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:48 vm06 ceph-mon[54477]: from='client.24766 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[57160]: pgmap v125: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[57160]: osdmap e111: 8 total, 8 up, 8 in
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/4283626639' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[57160]: from='client.24766 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[50703]: pgmap v125: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[50703]: osdmap e111: 8 total, 8 up, 8 in
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/4283626639' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:48 vm03 ceph-mon[50703]: from='client.24766 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:49.844 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_trunc PASSED [ 30%]
2026-03-10T08:38:49.852 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:38:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:38:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:38:50.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:49 vm03 ceph-mon[57160]: from='client.24766 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:50.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:49 vm03 ceph-mon[57160]: osdmap e112: 8 total, 8 up, 8 in
2026-03-10T08:38:50.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:49 vm03 ceph-mon[57160]: pgmap v128: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:50.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:49 vm03 ceph-mon[50703]: from='client.24766 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:50.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:49 vm03 ceph-mon[50703]: osdmap e112: 8 total, 8 up, 8 in
2026-03-10T08:38:50.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:49 vm03 ceph-mon[50703]: pgmap v128: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:49 vm06 ceph-mon[54477]: from='client.24766 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:49 vm06 ceph-mon[54477]: osdmap e112: 8 total, 8 up, 8 in
2026-03-10T08:38:50.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:49 vm06 ceph-mon[54477]: pgmap v128: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:51.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:50 vm03 ceph-mon[57160]: osdmap e113: 8 total, 8 up, 8 in
2026-03-10T08:38:51.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:50 vm03 ceph-mon[50703]: osdmap e113: 8 total, 8 up, 8 in
2026-03-10T08:38:51.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:50 vm06 ceph-mon[54477]: osdmap e113: 8 total, 8 up, 8 in
2026-03-10T08:38:52.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:51 vm03 ceph-mon[57160]: osdmap e114: 8 total, 8 up, 8 in
2026-03-10T08:38:52.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:51 vm03 ceph-mon[57160]: pgmap v131: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:38:52.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:51 vm03 ceph-mon[50703]: osdmap e114: 8 total, 8 up, 8 in
2026-03-10T08:38:52.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:51 vm03 ceph-mon[50703]: pgmap v131: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:38:52.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:51 vm06 ceph-mon[54477]: osdmap e114: 8 total, 8 up, 8 in
2026-03-10T08:38:52.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:51 vm06 ceph-mon[54477]: pgmap v131: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:38:52.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:38:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:38:53.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:52 vm06 ceph-mon[54477]: osdmap e115: 8 total, 8 up, 8 in
2026-03-10T08:38:53.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:52 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1614681912' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:53.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:52 vm06 ceph-mon[54477]: from='client.24769 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:53.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:52 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:38:53.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:52 vm06 ceph-mon[54477]: from='client.24769 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:53.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:52 vm06 ceph-mon[54477]: osdmap e116: 8 total, 8 up, 8 in
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[57160]: osdmap e115: 8 total, 8 up, 8 in
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1614681912' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[57160]: from='client.24769 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[57160]: from='client.24769 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[57160]: osdmap e116: 8 total, 8 up, 8 in
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[50703]: osdmap e115: 8 total, 8 up, 8 in
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1614681912' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[50703]: from='client.24769 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[50703]: from='client.24769 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:52 vm03 ceph-mon[50703]: osdmap e116: 8 total, 8 up, 8 in
2026-03-10T08:38:53.876 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext PASSED [ 31%]
2026-03-10T08:38:54.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:53 vm06 ceph-mon[54477]: pgmap v134: 196 pgs: 196 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T08:38:54.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:53 vm06 ceph-mon[54477]: osdmap e117: 8 total, 8 up, 8 in
2026-03-10T08:38:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:53 vm03 ceph-mon[57160]: pgmap v134: 196 pgs: 196 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T08:38:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:53 vm03 ceph-mon[57160]: osdmap e117: 8 total, 8 up, 8 in
2026-03-10T08:38:54.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:53 vm03 ceph-mon[50703]: pgmap v134: 196 pgs: 196 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T08:38:54.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:53 vm03 ceph-mon[50703]: osdmap e117: 8 total, 8 up, 8 in
2026-03-10T08:38:56.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:55 vm06 ceph-mon[54477]: osdmap e118: 8 total, 8 up, 8 in
2026-03-10T08:38:56.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:55 vm06 ceph-mon[54477]: pgmap v137: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:56.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:55 vm03 ceph-mon[50703]: osdmap e118: 8 total, 8 up, 8 in
2026-03-10T08:38:56.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:55 vm03 ceph-mon[50703]: pgmap v137: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:55 vm03 ceph-mon[57160]: osdmap e118: 8 total, 8 up, 8 in
2026-03-10T08:38:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:55 vm03 ceph-mon[57160]: pgmap v137: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 342 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: osdmap e119: 8 total, 8 up, 8 in
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1080096361' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: from='client.24775 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:38:57.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:56 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: osdmap e119: 8 total, 8 up, 8 in
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1080096361' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: from='client.24775 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: osdmap e119: 8 total, 8 up, 8 in
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1080096361' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: from='client.24775 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:38:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:56 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:38:57.949 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects_empty PASSED [ 32%]
2026-03-10T08:38:58.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:57 vm06 ceph-mon[54477]: from='client.24775 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:58.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:57 vm06 ceph-mon[54477]: osdmap e120: 8 total, 8 up, 8 in
2026-03-10T08:38:58.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:57 vm06 ceph-mon[54477]: pgmap v140: 196 pgs: 196 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:38:58.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:57 vm06 ceph-mon[54477]: osdmap e121: 8 total, 8 up, 8 in
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[57160]: from='client.24775 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[57160]: osdmap e120: 8 total, 8 up, 8 in
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[57160]: pgmap v140: 196 pgs: 196 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[57160]: osdmap e121: 8 total, 8 up, 8 in
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[50703]: from='client.24775 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[50703]: osdmap e120: 8 total, 8 up, 8 in
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[50703]: pgmap v140: 196 pgs: 196 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T08:38:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:57 vm03 ceph-mon[50703]: osdmap e121: 8 total, 8 up, 8 in
2026-03-10T08:38:59.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:38:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:38:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:39:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:59 vm06 ceph-mon[54477]: osdmap e122: 8 total, 8 up, 8 in
2026-03-10T08:39:00.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:38:59 vm06 ceph-mon[54477]: pgmap v143: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:00.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:59 vm03 ceph-mon[57160]: osdmap e122: 8 total, 8 up, 8 in
2026-03-10T08:39:00.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:38:59 vm03 ceph-mon[57160]: pgmap v143: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:00.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:59 vm03 ceph-mon[50703]: osdmap e122: 8 total, 8 up, 8 in
2026-03-10T08:39:00.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:38:59 vm03 ceph-mon[50703]: pgmap v143: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:01 vm06 ceph-mon[54477]: osdmap e123: 8 total, 8 up, 8 in
2026-03-10T08:39:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:01 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/4172640580' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:01.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:01 vm06 ceph-mon[54477]: from='client.24749 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:01.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:01 vm03 ceph-mon[57160]: osdmap e123: 8 total, 8 up, 8 in
2026-03-10T08:39:01.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:01 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/4172640580' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:01.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:01 vm03 ceph-mon[57160]: from='client.24749 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:01 vm03 ceph-mon[50703]: osdmap e123: 8 total, 8 up, 8 in
2026-03-10T08:39:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:01 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/4172640580' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:01.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:01 vm03 ceph-mon[50703]: from='client.24749 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:02.045 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_read_crc PASSED [ 34%]
2026-03-10T08:39:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:02 vm06 ceph-mon[54477]: from='client.24749 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:02 vm06 ceph-mon[54477]: osdmap e124: 8 total, 8 up, 8 in
2026-03-10T08:39:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:02 vm06 ceph-mon[54477]: pgmap v146: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:02.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:02 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[57160]: from='client.24749 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[57160]: osdmap e124: 8 total, 8 up, 8 in
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[57160]: pgmap v146: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[50703]: from='client.24749 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[50703]: osdmap e124: 8 total, 8 up, 8 in
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[50703]: pgmap v146: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:02.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:02 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:02.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:39:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:39:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:03 vm06 ceph-mon[54477]: osdmap e125: 8 total, 8 up, 8 in
2026-03-10T08:39:03.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:03 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:03.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:03 vm03 ceph-mon[57160]: osdmap e125: 8 total, 8 up, 8 in
2026-03-10T08:39:03.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:03 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:03.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:03 vm03 ceph-mon[50703]: osdmap e125: 8 total, 8 up, 8 in
2026-03-10T08:39:03.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:03 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:04.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:04 vm03 ceph-mon[57160]: osdmap e126: 8 total, 8 up, 8 in
2026-03-10T08:39:04.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:04 vm03 ceph-mon[57160]: pgmap v149: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:04.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:04 vm03 ceph-mon[57160]: osdmap e127: 8 total, 8 up, 8 in
2026-03-10T08:39:04.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:04 vm03 ceph-mon[50703]: osdmap e126: 8 total, 8 up, 8 in
2026-03-10T08:39:04.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:04 vm03 ceph-mon[50703]: pgmap v149: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:04.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:04 vm03 ceph-mon[50703]: osdmap e127: 8 total, 8 up, 8 in
2026-03-10T08:39:04.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:04 vm06 ceph-mon[54477]: osdmap e126: 8 total, 8 up, 8 in
2026-03-10T08:39:04.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:04 vm06 ceph-mon[54477]: pgmap v149: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB
avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:04.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:04 vm06 ceph-mon[54477]: osdmap e127: 8 total, 8 up, 8 in 2026-03-10T08:39:05.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:05 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1927504767' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:05.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:05 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1927504767' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:05.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:05 vm03 ceph-mon[57160]: osdmap e128: 8 total, 8 up, 8 in 2026-03-10T08:39:05.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:05 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1927504767' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:05.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:05 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1927504767' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:05.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:05 vm03 ceph-mon[50703]: osdmap e128: 8 total, 8 up, 8 in 2026-03-10T08:39:05.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:05 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1927504767' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:05.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:05 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1927504767' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:05.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:05 vm06 ceph-mon[54477]: osdmap e128: 8 total, 8 up, 8 in 2026-03-10T08:39:06.070 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects PASSED [ 35%] 2026-03-10T08:39:06.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:06 vm03 ceph-mon[57160]: pgmap v152: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:06.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:06 vm03 ceph-mon[57160]: osdmap e129: 8 total, 8 up, 8 in 2026-03-10T08:39:06.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:06 vm03 ceph-mon[50703]: pgmap v152: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:06.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:06 vm03 ceph-mon[50703]: osdmap e129: 8 total, 8 up, 8 in 2026-03-10T08:39:06.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:06 vm06 ceph-mon[54477]: pgmap v152: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 343 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:06.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:06 vm06 ceph-mon[54477]: osdmap e129: 8 total, 8 up, 8 in 2026-03-10T08:39:08.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:08 vm06 ceph-mon[54477]: osdmap e130: 8 total, 8 up, 8 in 2026-03-10T08:39:08.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:08 vm06 ceph-mon[54477]: pgmap v155: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:08.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:08 vm03 ceph-mon[57160]: osdmap e130: 8 total, 8 up, 8 in 
2026-03-10T08:39:08.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:08 vm03 ceph-mon[57160]: pgmap v155: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:08.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:08 vm03 ceph-mon[50703]: osdmap e130: 8 total, 8 up, 8 in 2026-03-10T08:39:08.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:08 vm03 ceph-mon[50703]: pgmap v155: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:09.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:09 vm03 ceph-mon[57160]: osdmap e131: 8 total, 8 up, 8 in 2026-03-10T08:39:09.527 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:09 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2654812731' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:09.527 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:09 vm03 ceph-mon[57160]: from='client.24790 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:09.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:09 vm03 ceph-mon[50703]: osdmap e131: 8 total, 8 up, 8 in 2026-03-10T08:39:09.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:09 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2654812731' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:09.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:09 vm03 ceph-mon[50703]: from='client.24790 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:09.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:09 vm06 ceph-mon[54477]: osdmap e131: 8 total, 8 up, 8 in 2026-03-10T08:39:09.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:09 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2654812731' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:09.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:09 vm06 ceph-mon[54477]: from='client.24790 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:09.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:39:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:39:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:39:10.589 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_ns_objects PASSED [ 36%] 2026-03-10T08:39:10.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:10 vm06 ceph-mon[54477]: from='client.24790 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:10.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:10 vm06 ceph-mon[54477]: osdmap e132: 8 total, 8 up, 8 in 2026-03-10T08:39:10.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:10 vm06 ceph-mon[54477]: pgmap v158: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:10.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:10 vm03 ceph-mon[57160]: from='client.24790 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:10.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:10 vm03 ceph-mon[57160]: osdmap e132: 8 total, 8 up, 8 in 2026-03-10T08:39:10.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:10 vm03 ceph-mon[57160]: pgmap v158: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:10.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:10 vm03 ceph-mon[50703]: from='client.24790 ' 
entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:10.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:10 vm03 ceph-mon[50703]: osdmap e132: 8 total, 8 up, 8 in 2026-03-10T08:39:10.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:10 vm03 ceph-mon[50703]: pgmap v158: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:11.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:11 vm06 ceph-mon[54477]: osdmap e133: 8 total, 8 up, 8 in 2026-03-10T08:39:11.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:11 vm06 ceph-mon[54477]: pgmap v160: 164 pgs: 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:11 vm03 ceph-mon[57160]: osdmap e133: 8 total, 8 up, 8 in 2026-03-10T08:39:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:11 vm03 ceph-mon[57160]: pgmap v160: 164 pgs: 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:11 vm03 ceph-mon[50703]: osdmap e133: 8 total, 8 up, 8 in 2026-03-10T08:39:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:11 vm03 ceph-mon[50703]: pgmap v160: 164 pgs: 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:12.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:39:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:39:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:12 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:39:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:12 vm06 ceph-mon[54477]: osdmap e134: 8 
total, 8 up, 8 in 2026-03-10T08:39:12.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:12 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:12 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:39:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:12 vm03 ceph-mon[57160]: osdmap e134: 8 total, 8 up, 8 in 2026-03-10T08:39:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:12 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:12 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:39:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:12 vm03 ceph-mon[50703]: osdmap e134: 8 total, 8 up, 8 in 2026-03-10T08:39:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:12 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[57160]: osdmap e135: 8 total, 8 up, 8 in 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/52966294' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[57160]: from='client.24761 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[57160]: pgmap v163: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[57160]: from='client.24761 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[57160]: osdmap e136: 8 total, 8 up, 8 in 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[50703]: osdmap e135: 8 total, 8 up, 8 in 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/52966294' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[50703]: from='client.24761 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[50703]: pgmap v163: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[50703]: from='client.24761 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:13 vm03 ceph-mon[50703]: osdmap e136: 8 total, 8 up, 8 in 2026-03-10T08:39:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:13 vm06 ceph-mon[54477]: osdmap e135: 8 total, 8 up, 8 in 2026-03-10T08:39:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:13 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/52966294' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:13 vm06 ceph-mon[54477]: from='client.24761 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:13 vm06 ceph-mon[54477]: pgmap v163: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:13 vm06 ceph-mon[54477]: from='client.24761 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:13 vm06 ceph-mon[54477]: osdmap e136: 8 total, 8 up, 8 in 2026-03-10T08:39:14.596 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs PASSED [ 37%] 2026-03-10T08:39:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:15 vm03 ceph-mon[57160]: osdmap e137: 8 total, 8 up, 8 in 2026-03-10T08:39:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:15 vm03 ceph-mon[57160]: pgmap v166: 164 pgs: 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:15 vm03 ceph-mon[50703]: osdmap e137: 8 total, 8 up, 8 in 2026-03-10T08:39:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:15 vm03 ceph-mon[50703]: pgmap v166: 164 pgs: 164 active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:15 vm06 ceph-mon[54477]: osdmap e137: 8 total, 8 up, 8 in 2026-03-10T08:39:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:15 vm06 ceph-mon[54477]: pgmap v166: 164 pgs: 164 
active+clean; 455 KiB data, 344 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:16 vm03 ceph-mon[57160]: osdmap e138: 8 total, 8 up, 8 in 2026-03-10T08:39:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:16 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:39:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:16 vm03 ceph-mon[50703]: osdmap e138: 8 total, 8 up, 8 in 2026-03-10T08:39:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:16 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:39:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:16 vm06 ceph-mon[54477]: osdmap e138: 8 total, 8 up, 8 in 2026-03-10T08:39:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:16 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[57160]: osdmap e139: 8 total, 8 up, 8 in 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1164491236' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[57160]: from='client.24802 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[57160]: pgmap v169: 196 pgs: 196 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[57160]: from='client.24802 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[57160]: osdmap e140: 8 total, 8 up, 8 in 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[50703]: osdmap e139: 8 total, 8 up, 8 in 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1164491236' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[50703]: from='client.24802 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[50703]: pgmap v169: 196 pgs: 196 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[50703]: from='client.24802 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:17 vm03 ceph-mon[50703]: osdmap e140: 8 total, 8 up, 8 in 2026-03-10T08:39:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:17 vm06 ceph-mon[54477]: osdmap e139: 8 total, 8 up, 8 in 2026-03-10T08:39:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:17 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1164491236' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:17 vm06 ceph-mon[54477]: from='client.24802 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:17 vm06 ceph-mon[54477]: pgmap v169: 196 pgs: 196 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T08:39:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:17 vm06 ceph-mon[54477]: from='client.24802 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:18.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:17 vm06 ceph-mon[54477]: osdmap e140: 8 total, 8 up, 8 in 2026-03-10T08:39:18.616 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_obj_xattrs PASSED [ 38%] 2026-03-10T08:39:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:18 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:39:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:18 vm03 ceph-mon[57160]: osdmap e141: 8 total, 8 up, 8 in 2026-03-10T08:39:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:18 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:39:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:18 vm03 ceph-mon[50703]: osdmap e141: 8 total, 8 up, 8 in 2026-03-10T08:39:19.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:18 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:39:19.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:18 vm06 ceph-mon[54477]: osdmap e141: 8 total, 8 up, 8 
in 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:19 vm03 ceph-mon[57160]: pgmap v172: 164 pgs: 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:19 vm03 ceph-mon[57160]: osdmap e142: 8 total, 8 up, 8 in 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3001188867' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:39:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:39:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:19 vm03 ceph-mon[50703]: pgmap v172: 164 pgs: 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:19 vm03 ceph-mon[50703]: osdmap e142: 8 total, 8 up, 8 in 2026-03-10T08:39:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3001188867' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:19 vm06 ceph-mon[54477]: pgmap v172: 164 pgs: 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:20.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:19 vm06 ceph-mon[54477]: osdmap e142: 8 total, 8 up, 8 in 2026-03-10T08:39:20.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:19 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3001188867' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:21.714 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_id PASSED [ 39%] 2026-03-10T08:39:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:21 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3001188867' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:22.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:21 vm06 ceph-mon[54477]: osdmap e143: 8 total, 8 up, 8 in 2026-03-10T08:39:22.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:21 vm06 ceph-mon[54477]: pgmap v175: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:39:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:21 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3001188867' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:21 vm03 ceph-mon[57160]: osdmap e143: 8 total, 8 up, 8 in 2026-03-10T08:39:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:21 vm03 ceph-mon[57160]: pgmap v175: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:39:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:21 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3001188867' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:21 vm03 ceph-mon[50703]: osdmap e143: 8 total, 8 up, 8 in 2026-03-10T08:39:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:21 vm03 ceph-mon[50703]: pgmap v175: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:39:22.758 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:39:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:39:23.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:22 vm06 ceph-mon[54477]: osdmap e144: 8 total, 8 up, 8 in 2026-03-10T08:39:23.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:22 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:23.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:22 vm03 ceph-mon[57160]: osdmap e144: 8 total, 8 up, 8 in 2026-03-10T08:39:23.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:22 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:23.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:22 vm03 ceph-mon[50703]: osdmap e144: 8 total, 8 up, 8 in 2026-03-10T08:39:23.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:22 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:23 vm06 ceph-mon[54477]: osdmap e145: 8 total, 8 up, 8 in 2026-03-10T08:39:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:39:23 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/4209497256' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:23 vm06 ceph-mon[54477]: pgmap v178: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:23 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/4209497256' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:23 vm06 ceph-mon[54477]: osdmap e146: 8 total, 8 up, 8 in
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[57160]: osdmap e145: 8 total, 8 up, 8 in
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/4209497256' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[57160]: pgmap v178: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/4209497256' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[57160]: osdmap e146: 8 total, 8 up, 8 in
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[50703]: osdmap e145: 8 total, 8 up, 8 in
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/4209497256' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[50703]: pgmap v178: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/4209497256' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:23 vm03 ceph-mon[50703]: osdmap e146: 8 total, 8 up, 8 in
2026-03-10T08:39:24.748 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_name PASSED [ 40%]
2026-03-10T08:39:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:25 vm06 ceph-mon[54477]: osdmap e147: 8 total, 8 up, 8 in
2026-03-10T08:39:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:25 vm06 ceph-mon[54477]: pgmap v181: 164 pgs: 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:25 vm03 ceph-mon[57160]: osdmap e147: 8 total, 8 up, 8 in
2026-03-10T08:39:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:25 vm03 ceph-mon[57160]: pgmap v181: 164 pgs: 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:25 vm03 ceph-mon[50703]: osdmap e147: 8 total, 8 up, 8 in
2026-03-10T08:39:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:25 vm03 ceph-mon[50703]: pgmap v181: 164 pgs: 164 active+clean; 455 KiB data, 345 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:27.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:26 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:27.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:26 vm06 ceph-mon[54477]: osdmap e148: 8 total, 8 up, 8 in
2026-03-10T08:39:27.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:26 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:27.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:26 vm03 ceph-mon[57160]: osdmap e148: 8 total, 8 up, 8 in
2026-03-10T08:39:27.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:26 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:27.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:26 vm03 ceph-mon[50703]: osdmap e148: 8 total, 8 up, 8 in
2026-03-10T08:39:28.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:27 vm03 ceph-mon[57160]: osdmap e149: 8 total, 8 up, 8 in
2026-03-10T08:39:28.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:27 vm03 ceph-mon[57160]: pgmap v184: 196 pgs: 196 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:28.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:27 vm03 ceph-mon[50703]: osdmap e149: 8 total, 8 up, 8 in
2026-03-10T08:39:28.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:27 vm03 ceph-mon[50703]: pgmap v184: 196 pgs: 196 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:28.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:27 vm06 ceph-mon[54477]: osdmap e149: 8 total, 8 up, 8 in
2026-03-10T08:39:28.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:27 vm06 ceph-mon[54477]: pgmap v184: 196 pgs: 196 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:28 vm03 ceph-mon[57160]: osdmap e150: 8 total, 8 up, 8 in
2026-03-10T08:39:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:28 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3735979981' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:28 vm03 ceph-mon[57160]: from='client.24811 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:29.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:28 vm03 ceph-mon[50703]: osdmap e150: 8 total, 8 up, 8 in
2026-03-10T08:39:29.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:28 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3735979981' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:29.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:28 vm03 ceph-mon[50703]: from='client.24811 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:29.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:28 vm06 ceph-mon[54477]: osdmap e150: 8 total, 8 up, 8 in
2026-03-10T08:39:29.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:28 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3735979981' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:29.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:28 vm06 ceph-mon[54477]: from='client.24811 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:29.883 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_create_snap PASSED [ 41%]
2026-03-10T08:39:29.894 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:39:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:39:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[57160]: from='client.24811 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[57160]: osdmap e151: 8 total, 8 up, 8 in
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[57160]: pgmap v187: 196 pgs: 196 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[57160]: osdmap e152: 8 total, 8 up, 8 in
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[50703]: from='client.24811 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[50703]: osdmap e151: 8 total, 8 up, 8 in
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[50703]: pgmap v187: 196 pgs: 196 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:29 vm03 ceph-mon[50703]: osdmap e152: 8 total, 8 up, 8 in
2026-03-10T08:39:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:29 vm06 ceph-mon[54477]: from='client.24811 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:29 vm06 ceph-mon[54477]: osdmap e151: 8 total, 8 up, 8 in
2026-03-10T08:39:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:29 vm06 ceph-mon[54477]: pgmap v187: 196 pgs: 196 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:29 vm06 ceph-mon[54477]: osdmap e152: 8 total, 8 up, 8 in
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[57160]: osdmap e153: 8 total, 8 up, 8 in
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/322732871' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[57160]: pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[50703]: osdmap e153: 8 total, 8 up, 8 in
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/322732871' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[50703]: pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:31 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:31 vm06 ceph-mon[54477]: osdmap e153: 8 total, 8 up, 8 in
2026-03-10T08:39:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:31 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/322732871' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:31 vm06 ceph-mon[54477]: pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:31 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:32.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:39:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:39:32.920 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps_empty PASSED [ 42%]
2026-03-10T08:39:33.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:32 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/322732871' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:33.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:32 vm03 ceph-mon[57160]: osdmap e154: 8 total, 8 up, 8 in
2026-03-10T08:39:33.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:32 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:33.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:32 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/322732871' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:33.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:32 vm03 ceph-mon[50703]: osdmap e154: 8 total, 8 up, 8 in
2026-03-10T08:39:33.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:32 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:32 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/322732871' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:32 vm06 ceph-mon[54477]: osdmap e154: 8 total, 8 up, 8 in
2026-03-10T08:39:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:32 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:34.231 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:33 vm03 ceph-mon[57160]: osdmap e155: 8 total, 8 up, 8 in
2026-03-10T08:39:34.231 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:33 vm03 ceph-mon[57160]: pgmap v193: 164 pgs: 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:34.231 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:33 vm03 ceph-mon[50703]: osdmap e155: 8 total, 8 up, 8 in
2026-03-10T08:39:34.231 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:33 vm03 ceph-mon[50703]: pgmap v193: 164 pgs: 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:34.241 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:33 vm06 ceph-mon[54477]: osdmap e155: 8 total, 8 up, 8 in
2026-03-10T08:39:34.241 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:33 vm06 ceph-mon[54477]: pgmap v193: 164 pgs: 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:35.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:34 vm03 ceph-mon[57160]: osdmap e156: 8 total, 8 up, 8 in
2026-03-10T08:39:35.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:34 vm03 ceph-mon[50703]: osdmap e156: 8 total, 8 up, 8 in
2026-03-10T08:39:35.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:34 vm06 ceph-mon[54477]: osdmap e156: 8 total, 8 up, 8 in
2026-03-10T08:39:36.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:35 vm06 ceph-mon[54477]: osdmap e157: 8 total, 8 up, 8 in
2026-03-10T08:39:36.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:35 vm06 ceph-mon[54477]: pgmap v196: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:36.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:35 vm03 ceph-mon[57160]: osdmap e157: 8 total, 8 up, 8 in
2026-03-10T08:39:36.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:35 vm03 ceph-mon[57160]: pgmap v196: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:36.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:35 vm03 ceph-mon[50703]: osdmap e157: 8 total, 8 up, 8 in
2026-03-10T08:39:36.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:35 vm03 ceph-mon[50703]: pgmap v196: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 346 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:37.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:36 vm06 ceph-mon[54477]: osdmap e158: 8 total, 8 up, 8 in
2026-03-10T08:39:37.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:36 vm03 ceph-mon[57160]: osdmap e158: 8 total, 8 up, 8 in
2026-03-10T08:39:37.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:36 vm03 ceph-mon[50703]: osdmap e158: 8 total, 8 up, 8 in
2026-03-10T08:39:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:37 vm06 ceph-mon[54477]: osdmap e159: 8 total, 8 up, 8 in
2026-03-10T08:39:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:37 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1323986721' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:37 vm06 ceph-mon[54477]: pgmap v199: 196 pgs: 196 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:37 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1323986721' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:37 vm06 ceph-mon[54477]: osdmap e160: 8 total, 8 up, 8 in
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[57160]: osdmap e159: 8 total, 8 up, 8 in
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1323986721' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[57160]: pgmap v199: 196 pgs: 196 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1323986721' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[57160]: osdmap e160: 8 total, 8 up, 8 in
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[50703]: osdmap e159: 8 total, 8 up, 8 in
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1323986721' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[50703]: pgmap v199: 196 pgs: 196 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1323986721' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:37 vm03 ceph-mon[50703]: osdmap e160: 8 total, 8 up, 8 in
2026-03-10T08:39:38.972 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps PASSED [ 43%]
2026-03-10T08:39:39.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:39:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:39:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:39:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:39 vm06 ceph-mon[54477]: osdmap e161: 8 total, 8 up, 8 in
2026-03-10T08:39:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:39 vm06 ceph-mon[54477]: pgmap v202: 164 pgs: 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:39 vm03 ceph-mon[57160]: osdmap e161: 8 total, 8 up, 8 in
2026-03-10T08:39:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:39 vm03 ceph-mon[57160]: pgmap v202: 164 pgs: 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:39 vm03 ceph-mon[50703]: osdmap e161: 8 total, 8 up, 8 in
2026-03-10T08:39:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:39 vm03 ceph-mon[50703]: pgmap v202: 164 pgs: 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:41.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:40 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:41.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:40 vm06 ceph-mon[54477]: osdmap e162: 8 total, 8 up, 8 in
2026-03-10T08:39:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:40 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:40 vm03 ceph-mon[57160]: osdmap e162: 8 total, 8 up, 8 in
2026-03-10T08:39:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:40 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:40 vm03 ceph-mon[50703]: osdmap e162: 8 total, 8 up, 8 in
2026-03-10T08:39:42.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:41 vm06 ceph-mon[54477]: osdmap e163: 8 total, 8 up, 8 in
2026-03-10T08:39:42.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:41 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1036115360' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:42.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:41 vm06 ceph-mon[54477]: pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:41 vm03 ceph-mon[57160]: osdmap e163: 8 total, 8 up, 8 in
2026-03-10T08:39:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:41 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1036115360' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:41 vm03 ceph-mon[57160]: pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:41 vm03 ceph-mon[50703]: osdmap e163: 8 total, 8 up, 8 in
2026-03-10T08:39:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:41 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1036115360' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:41 vm03 ceph-mon[50703]: pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:42.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:39:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:39:42.920 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lookup_snap PASSED [ 45%]
2026-03-10T08:39:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:42 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1036115360' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:42 vm06 ceph-mon[54477]: osdmap e164: 8 total, 8 up, 8 in
2026-03-10T08:39:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:42 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:42 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1036115360' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:42 vm03 ceph-mon[57160]: osdmap e164: 8 total, 8 up, 8 in
2026-03-10T08:39:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:42 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:43 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1036115360' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:43 vm03 ceph-mon[50703]: osdmap e164: 8 total, 8 up, 8 in
2026-03-10T08:39:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:43 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:44 vm06 ceph-mon[54477]: osdmap e165: 8 total, 8 up, 8 in
2026-03-10T08:39:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:44 vm06 ceph-mon[54477]: pgmap v208: 164 pgs: 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:44.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:44 vm03 ceph-mon[57160]: osdmap e165: 8 total, 8 up, 8 in
2026-03-10T08:39:44.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:44 vm03 ceph-mon[57160]: pgmap v208: 164 pgs: 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:44.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:44 vm03 ceph-mon[50703]: osdmap e165: 8 total, 8 up, 8 in
2026-03-10T08:39:44.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:44 vm03 ceph-mon[50703]: pgmap v208: 164 pgs: 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:45.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:45 vm03 ceph-mon[57160]: osdmap e166: 8 total, 8 up, 8 in
2026-03-10T08:39:45.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:45 vm03 ceph-mon[50703]: osdmap e166: 8 total, 8 up, 8 in
2026-03-10T08:39:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:45 vm06 ceph-mon[54477]: osdmap e166: 8 total, 8 up, 8 in
2026-03-10T08:39:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:46 vm06 ceph-mon[54477]: osdmap e167: 8 total, 8 up, 8 in
2026-03-10T08:39:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:46 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1567057565' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:46.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:46 vm06 ceph-mon[54477]: from='client.24826 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:46.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:46 vm06 ceph-mon[54477]: pgmap v211: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:46.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:46 vm06 ceph-mon[54477]: from='client.24826 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:46.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:46 vm06 ceph-mon[54477]: osdmap e168: 8 total, 8 up, 8 in
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[57160]: osdmap e167: 8 total, 8 up, 8 in
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1567057565' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[57160]: from='client.24826 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[57160]: pgmap v211: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[57160]: from='client.24826 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[57160]: osdmap e168: 8 total, 8 up, 8 in
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[50703]: osdmap e167: 8 total, 8 up, 8 in
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1567057565' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[50703]: from='client.24826 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[50703]: pgmap v211: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 347 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[50703]: from='client.24826 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:46 vm03 ceph-mon[50703]: osdmap e168: 8 total, 8 up, 8 in
2026-03-10T08:39:47.050 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_timestamp PASSED [ 46%]
2026-03-10T08:39:47.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:47 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:47.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:47 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:47.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:47 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:47.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:47 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:47.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:47 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:39:47.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:47 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:39:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:48 vm03 ceph-mon[57160]: osdmap e169: 8 total, 8 up, 8 in
2026-03-10T08:39:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:48 vm03 ceph-mon[57160]: pgmap v214: 164 pgs: 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:48 vm03 ceph-mon[50703]: osdmap e169: 8 total, 8 up, 8 in
2026-03-10T08:39:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:48 vm03 ceph-mon[50703]: pgmap v214: 164 pgs: 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:48 vm06 ceph-mon[54477]: osdmap e169: 8 total, 8 up, 8 in
2026-03-10T08:39:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:48 vm06 ceph-mon[54477]: pgmap v214: 164 pgs: 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:49.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:49 vm03 ceph-mon[57160]: osdmap e170: 8 total, 8 up, 8 in
2026-03-10T08:39:49.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:49 vm03 ceph-mon[50703]: osdmap e170: 8 total, 8 up, 8 in
2026-03-10T08:39:49.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:49 vm06 ceph-mon[54477]: osdmap e170: 8 total, 8 up, 8 in
2026-03-10T08:39:49.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:39:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:39:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[57160]: osdmap e171: 8 total, 8 up, 8 in
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[57160]: pgmap v217: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[57160]: osdmap e172: 8 total, 8 up, 8 in
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2783675572' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[50703]: osdmap e171: 8 total, 8 up, 8 in
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[50703]: pgmap v217: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[50703]: osdmap e172: 8 total, 8 up, 8 in
2026-03-10T08:39:50.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:50 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2783675572' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:50 vm06 ceph-mon[54477]: osdmap e171: 8 total, 8 up, 8 in
2026-03-10T08:39:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:50 vm06 ceph-mon[54477]: pgmap v217: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:39:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:50 vm06 ceph-mon[54477]: osdmap e172: 8 total, 8 up, 8 in
2026-03-10T08:39:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:50 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2783675572' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:39:52.219 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_snap PASSED [ 47%]
2026-03-10T08:39:52.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:39:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:39:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:52 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2783675572' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:52 vm06 ceph-mon[54477]: osdmap e173: 8 total, 8 up, 8 in
2026-03-10T08:39:52.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:52 vm06 ceph-mon[54477]: pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:52 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2783675572' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:52 vm03 ceph-mon[57160]: osdmap e173: 8 total, 8 up, 8 in
2026-03-10T08:39:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:52 vm03 ceph-mon[57160]: pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:52 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2783675572' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:39:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:52 vm03 ceph-mon[50703]: osdmap e173: 8 total, 8 up, 8 in
2026-03-10T08:39:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:52 vm03 ceph-mon[50703]: pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail
2026-03-10T08:39:53.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:53 vm06 ceph-mon[54477]: osdmap e174: 8 total, 8 up, 8 in
2026-03-10T08:39:53.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:53 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:53.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:53 vm03 ceph-mon[57160]: osdmap e174: 8 total, 8 up, 8 in
2026-03-10T08:39:53.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:53 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:39:53.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:53 vm03 ceph-mon[50703]: osdmap e174: 8 total, 8 up, 8 in
2026-03-10T08:39:53.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:53 vm03
ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:39:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:54 vm06 ceph-mon[54477]: osdmap e175: 8 total, 8 up, 8 in 2026-03-10T08:39:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:54 vm06 ceph-mon[54477]: pgmap v223: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:54 vm03 ceph-mon[57160]: osdmap e175: 8 total, 8 up, 8 in 2026-03-10T08:39:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:54 vm03 ceph-mon[57160]: pgmap v223: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:54 vm03 ceph-mon[50703]: osdmap e175: 8 total, 8 up, 8 in 2026-03-10T08:39:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:54 vm03 ceph-mon[50703]: pgmap v223: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:55 vm06 ceph-mon[54477]: osdmap e176: 8 total, 8 up, 8 in 2026-03-10T08:39:55.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:55 vm03 ceph-mon[57160]: osdmap e176: 8 total, 8 up, 8 in 2026-03-10T08:39:55.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:55 vm03 ceph-mon[50703]: osdmap e176: 8 total, 8 up, 8 in 2026-03-10T08:39:56.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:56 vm06 ceph-mon[54477]: osdmap e177: 8 total, 8 up, 8 in 2026-03-10T08:39:56.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:56 vm06 ceph-mon[54477]: pgmap v226: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-10T08:39:56.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:56 vm03 ceph-mon[50703]: osdmap e177: 8 total, 8 up, 8 in 2026-03-10T08:39:56.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:56 vm03 ceph-mon[50703]: pgmap v226: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:56.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:56 vm03 ceph-mon[57160]: osdmap e177: 8 total, 8 up, 8 in 2026-03-10T08:39:56.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:56 vm03 ceph-mon[57160]: pgmap v226: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 348 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:39:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: osdmap e178: 8 total, 8 up, 8 in 2026-03-10T08:39:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2066123421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: from='client.24835 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:39:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:39:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:39:57.590 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:57 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: osdmap e178: 8 total, 8 up, 8 in 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2066123421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: from='client.24835 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: osdmap e178: 8 total, 8 up, 8 in 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2066123421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: from='client.24835 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:39:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:57 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:39:58.288 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback PASSED [ 48%] 2026-03-10T08:39:58.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:58 vm06 ceph-mon[54477]: from='client.24835 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:58.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:58 vm06 ceph-mon[54477]: osdmap e179: 8 total, 8 up, 8 in 2026-03-10T08:39:58.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:58 vm06 ceph-mon[54477]: pgmap v229: 196 pgs: 196 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T08:39:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:58 vm03 ceph-mon[57160]: from='client.24835 ' entity='client.admin' 
cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:58 vm03 ceph-mon[57160]: osdmap e179: 8 total, 8 up, 8 in 2026-03-10T08:39:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:58 vm03 ceph-mon[57160]: pgmap v229: 196 pgs: 196 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T08:39:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:58 vm03 ceph-mon[50703]: from='client.24835 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:39:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:58 vm03 ceph-mon[50703]: osdmap e179: 8 total, 8 up, 8 in 2026-03-10T08:39:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:58 vm03 ceph-mon[50703]: pgmap v229: 196 pgs: 196 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T08:39:59.527 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:39:59 vm03 ceph-mon[57160]: osdmap e180: 8 total, 8 up, 8 in 2026-03-10T08:39:59.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:39:59 vm03 ceph-mon[50703]: osdmap e180: 8 total, 8 up, 8 in 2026-03-10T08:39:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:39:59 vm06 ceph-mon[54477]: osdmap e180: 8 total, 8 up, 8 in 2026-03-10T08:39:59.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:39:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:39:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:40:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: osdmap e181: 8 total, 8 up, 8 in 2026-03-10T08:40:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: pgmap v232: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T08:40:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-10T08:40:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-10T08:40:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: application not enabled on pool 'rbd' 2026-03-10T08:40:00.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: application not enabled on pool 'test_pool' 2026-03-10T08:40:00.590 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:00 vm06 ceph-mon[54477]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: osdmap e181: 8 total, 8 up, 8 in 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: pgmap v232: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: application not enabled on pool 'rbd' 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: application not enabled on pool 'test_pool' 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[57160]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs',
'rbd', 'rgw', or freeform for custom applications. 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: osdmap e181: 8 total, 8 up, 8 in 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: pgmap v232: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: application not enabled on pool 'rbd' 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: application not enabled on pool 'test_pool' 2026-03-10T08:40:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:00 vm03 ceph-mon[50703]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T08:40:01.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:01 vm06 ceph-mon[54477]: osdmap e182: 8 total, 8 up, 8 in 2026-03-10T08:40:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:01 vm03 ceph-mon[57160]: osdmap e182: 8 total, 8 up, 8 in 2026-03-10T08:40:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:01 vm03 ceph-mon[50703]: osdmap e182: 8 total, 8 up, 8 in 2026-03-10T08:40:02.589 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:40:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:40:02.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:02 vm06 ceph-mon[54477]: osdmap e183: 8 total, 8 up, 8 in 2026-03-10T08:40:02.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:02 vm06 ceph-mon[54477]: pgmap v235: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:02.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:02 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:02 vm03 ceph-mon[57160]: osdmap e183: 8 total, 8 up, 8 in 2026-03-10T08:40:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:02 vm03 ceph-mon[57160]: pgmap v235: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:02 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:02 vm03 ceph-mon[50703]: osdmap e183: 8 total, 8 up, 8 in 2026-03-10T08:40:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:02 vm03 ceph-mon[50703]: pgmap v235: 196 pgs: 32 
unknown, 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:02 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:03.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:03 vm06 ceph-mon[54477]: osdmap e184: 8 total, 8 up, 8 in 2026-03-10T08:40:03.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:03 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3704981260' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:03.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:03 vm06 ceph-mon[54477]: from='client.24806 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:03.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:03 vm03 ceph-mon[57160]: osdmap e184: 8 total, 8 up, 8 in 2026-03-10T08:40:03.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:03 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3704981260' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:03.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:03 vm03 ceph-mon[57160]: from='client.24806 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:03.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:03 vm03 ceph-mon[50703]: osdmap e184: 8 total, 8 up, 8 in 2026-03-10T08:40:03.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:03 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3704981260' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:03.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:03 vm03 ceph-mon[50703]: from='client.24806 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:04.337 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback_removed PASSED [ 49%] 2026-03-10T08:40:04.640 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:04 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:04.640 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:04 vm06 ceph-mon[54477]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 506 B/s wr, 1 op/s 2026-03-10T08:40:04.640 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:04 vm06 ceph-mon[54477]: from='client.24806 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:04.640 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:04 vm06 ceph-mon[54477]: osdmap e185: 8 total, 8 up, 8 in 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[57160]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 506 B/s wr, 1 op/s 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[57160]: from='client.24806 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:04.678 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[57160]: osdmap e185: 8 total, 8 up, 8 in 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[50703]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 506 B/s wr, 1 op/s 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[50703]: from='client.24806 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:04.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:04 vm03 ceph-mon[50703]: osdmap e185: 8 total, 8 up, 8 in 2026-03-10T08:40:05.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:05 vm06 ceph-mon[54477]: osdmap e186: 8 total, 8 up, 8 in 2026-03-10T08:40:05.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:05 vm03 ceph-mon[57160]: osdmap e186: 8 total, 8 up, 8 in 2026-03-10T08:40:05.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:05 vm03 ceph-mon[50703]: osdmap e186: 8 total, 8 up, 8 in 2026-03-10T08:40:06.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:06 vm03 ceph-mon[57160]: pgmap v240: 164 pgs: 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:06.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:06 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:06.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:06 vm03 ceph-mon[57160]: osdmap e187: 8 total, 8 up, 8 in 2026-03-10T08:40:06.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:06 vm03 ceph-mon[50703]: pgmap v240: 164 pgs: 164 active+clean; 455 
KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:06.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:06 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:06.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:06 vm03 ceph-mon[50703]: osdmap e187: 8 total, 8 up, 8 in 2026-03-10T08:40:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:06 vm06 ceph-mon[54477]: pgmap v240: 164 pgs: 164 active+clean; 455 KiB data, 349 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:06 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:06 vm06 ceph-mon[54477]: osdmap e187: 8 total, 8 up, 8 in 2026-03-10T08:40:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:07 vm03 ceph-mon[57160]: osdmap e188: 8 total, 8 up, 8 in 2026-03-10T08:40:07.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:07 vm03 ceph-mon[50703]: osdmap e188: 8 total, 8 up, 8 in 2026-03-10T08:40:07.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:07 vm06 ceph-mon[54477]: osdmap e188: 8 total, 8 up, 8 in 2026-03-10T08:40:08.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:08 vm03 ceph-mon[57160]: pgmap v243: 196 pgs: 196 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:40:08.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:08 vm03 ceph-mon[57160]: osdmap e189: 8 total, 8 up, 8 in 2026-03-10T08:40:08.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:08 vm03 ceph-mon[50703]: pgmap v243: 196 pgs: 196 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:40:08.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:08 vm03 ceph-mon[50703]: 
osdmap e189: 8 total, 8 up, 8 in 2026-03-10T08:40:08.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:08 vm06 ceph-mon[54477]: pgmap v243: 196 pgs: 196 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:40:08.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:08 vm06 ceph-mon[54477]: osdmap e189: 8 total, 8 up, 8 in 2026-03-10T08:40:09.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:09 vm03 ceph-mon[57160]: osdmap e190: 8 total, 8 up, 8 in 2026-03-10T08:40:09.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:09 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1315343575' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:09.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:40:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:40:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:40:09.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:09 vm03 ceph-mon[50703]: osdmap e190: 8 total, 8 up, 8 in 2026-03-10T08:40:09.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:09 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1315343575' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:09.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:09 vm06 ceph-mon[54477]: osdmap e190: 8 total, 8 up, 8 in 2026-03-10T08:40:09.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:09 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1315343575' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:10.415 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_read PASSED [ 50%] 2026-03-10T08:40:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:10 vm03 ceph-mon[57160]: pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:10 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1315343575' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:10 vm03 ceph-mon[57160]: osdmap e191: 8 total, 8 up, 8 in 2026-03-10T08:40:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:10 vm03 ceph-mon[50703]: pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:10 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1315343575' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:10 vm03 ceph-mon[50703]: osdmap e191: 8 total, 8 up, 8 in 2026-03-10T08:40:10.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:10 vm06 ceph-mon[54477]: pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:10.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:10 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1315343575' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:10.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:10 vm06 ceph-mon[54477]: osdmap e191: 8 total, 8 up, 8 in 2026-03-10T08:40:11.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:11 vm03 ceph-mon[57160]: osdmap e192: 8 total, 8 up, 8 in 2026-03-10T08:40:11.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:11 vm03 ceph-mon[50703]: osdmap e192: 8 total, 8 up, 8 in 2026-03-10T08:40:11.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:11 vm06 ceph-mon[54477]: osdmap e192: 8 total, 8 up, 8 in 2026-03-10T08:40:12.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:40:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:40:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:12 vm06 ceph-mon[54477]: pgmap v249: 164 pgs: 164 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:12 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:12 vm06 ceph-mon[54477]: osdmap e193: 8 total, 8 up, 8 in 2026-03-10T08:40:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:12 vm03 ceph-mon[57160]: pgmap v249: 164 pgs: 164 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:12 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:12 vm03 ceph-mon[57160]: osdmap e193: 8 total, 8 up, 8 in 2026-03-10T08:40:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:12 vm03 ceph-mon[50703]: pgmap v249: 164 
pgs: 164 active+clean; 455 KiB data, 385 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:12 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:12 vm03 ceph-mon[50703]: osdmap e193: 8 total, 8 up, 8 in 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: osdmap e194: 8 total, 8 up, 8 in 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3457720042' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: from='client.24818 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: from='client.24818 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:13 vm06 ceph-mon[54477]: osdmap e195: 8 total, 8 up, 8 in 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[57160]: osdmap e194: 8 total, 8 up, 8 in 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 
ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3457720042' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[57160]: from='client.24818 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[57160]: pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[57160]: from='client.24818 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[57160]: osdmap e195: 8 total, 8 up, 8 in 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: osdmap e194: 8 total, 8 up, 8 in 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3457720042' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: from='client.24818 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: from='client.24818 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:13 vm03 ceph-mon[50703]: osdmap e195: 8 total, 8 up, 8 in 2026-03-10T08:40:14.512 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap PASSED [ 51%] 2026-03-10T08:40:15.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:15 vm06 ceph-mon[54477]: osdmap e196: 8 total, 8 up, 8 in 2026-03-10T08:40:15.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:15 vm06 ceph-mon[54477]: pgmap v255: 164 pgs: 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:15 vm03 ceph-mon[57160]: osdmap e196: 8 total, 8 up, 8 in 2026-03-10T08:40:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:15 vm03 ceph-mon[57160]: pgmap v255: 164 pgs: 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:15 vm03 ceph-mon[50703]: osdmap e196: 8 total, 8 up, 8 in 2026-03-10T08:40:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:15 vm03 ceph-mon[50703]: pgmap v255: 164 pgs: 164 
active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:16.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:16 vm06 ceph-mon[54477]: osdmap e197: 8 total, 8 up, 8 in 2026-03-10T08:40:16.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:16 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:16 vm03 ceph-mon[57160]: osdmap e197: 8 total, 8 up, 8 in 2026-03-10T08:40:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:16 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:16 vm03 ceph-mon[50703]: osdmap e197: 8 total, 8 up, 8 in 2026-03-10T08:40:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:16 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:17 vm06 ceph-mon[54477]: osdmap e198: 8 total, 8 up, 8 in 2026-03-10T08:40:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:17 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1359132251' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:17 vm06 ceph-mon[54477]: from='client.24853 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:17 vm06 ceph-mon[54477]: pgmap v258: 196 pgs: 196 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[57160]: osdmap e198: 8 total, 8 up, 8 in 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1359132251' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[57160]: from='client.24853 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[57160]: pgmap v258: 196 pgs: 196 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[50703]: osdmap e198: 8 total, 8 up, 8 in 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1359132251' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[50703]: from='client.24853 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:17 vm03 ceph-mon[50703]: pgmap v258: 196 pgs: 196 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:18.613 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap_aio PASSED [ 52%] 2026-03-10T08:40:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:18 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:18 vm03 ceph-mon[50703]: from='client.24853 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:18 vm03 ceph-mon[50703]: osdmap e199: 8 total, 8 up, 8 in 2026-03-10T08:40:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:18 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:18 vm03 ceph-mon[57160]: from='client.24853 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:18 vm03 ceph-mon[57160]: osdmap e199: 8 total, 8 up, 8 in 2026-03-10T08:40:19.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:18 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:19.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:18 vm06 ceph-mon[54477]: from='client.24853 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:19.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:18 vm06 ceph-mon[54477]: osdmap e199: 8 total, 8 up, 8 in 2026-03-10T08:40:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:19 vm03 ceph-mon[57160]: osdmap e200: 8 total, 8 up, 8 in 2026-03-10T08:40:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:19 vm03 ceph-mon[57160]: pgmap v261: 164 pgs: 164 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:19.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:40:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:40:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:40:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:19 vm03 ceph-mon[50703]: osdmap e200: 8 total, 8 up, 8 in 2026-03-10T08:40:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:19 vm03 ceph-mon[50703]: pgmap v261: 164 pgs: 164 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:19 vm06 ceph-mon[54477]: osdmap e200: 8 total, 8 up, 8 in 2026-03-10T08:40:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:19 vm06 ceph-mon[54477]: pgmap v261: 164 pgs: 164 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:20.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:20 vm03 ceph-mon[57160]: osdmap e201: 8 total, 8 up, 8 in 2026-03-10T08:40:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:20 vm03 ceph-mon[50703]: osdmap e201: 8 total, 8 up, 8 in 2026-03-10T08:40:21.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:20 vm06 ceph-mon[54477]: osdmap e201: 8 total, 8 up, 8 
in 2026-03-10T08:40:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:21 vm06 ceph-mon[54477]: osdmap e202: 8 total, 8 up, 8 in 2026-03-10T08:40:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:21 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1558637742' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:21 vm06 ceph-mon[54477]: from='client.24830 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:21 vm06 ceph-mon[54477]: pgmap v264: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[57160]: osdmap e202: 8 total, 8 up, 8 in 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1558637742' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[57160]: from='client.24830 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[57160]: pgmap v264: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[50703]: osdmap e202: 8 total, 8 up, 8 in 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1558637742' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[50703]: from='client.24830 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:21 vm03 ceph-mon[50703]: pgmap v264: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 425 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:22.709 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_ops PASSED [ 53%] 2026-03-10T08:40:22.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:40:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:40:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:22 vm06 ceph-mon[54477]: from='client.24830 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:22 vm06 ceph-mon[54477]: osdmap e203: 8 total, 8 up, 8 in 2026-03-10T08:40:23.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:22 vm03 ceph-mon[57160]: from='client.24830 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:23.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:22 vm03 ceph-mon[57160]: osdmap e203: 8 total, 8 up, 8 in 2026-03-10T08:40:23.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:22 vm03 ceph-mon[50703]: from='client.24830 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:23.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:22 vm03 ceph-mon[50703]: osdmap e203: 8 total, 8 up, 8 in 2026-03-10T08:40:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:23 vm06 ceph-mon[54477]: from='client.14580 
v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:23 vm06 ceph-mon[54477]: osdmap e204: 8 total, 8 up, 8 in 2026-03-10T08:40:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:23 vm06 ceph-mon[54477]: pgmap v267: 164 pgs: 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:23 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:23 vm03 ceph-mon[57160]: osdmap e204: 8 total, 8 up, 8 in 2026-03-10T08:40:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:23 vm03 ceph-mon[57160]: pgmap v267: 164 pgs: 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:23 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:23 vm03 ceph-mon[50703]: osdmap e204: 8 total, 8 up, 8 in 2026-03-10T08:40:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:23 vm03 ceph-mon[50703]: pgmap v267: 164 pgs: 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:25.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:24 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:25.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:24 vm06 ceph-mon[54477]: osdmap e205: 8 total, 8 up, 8 in 
2026-03-10T08:40:25.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:24 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:25.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:24 vm03 ceph-mon[57160]: osdmap e205: 8 total, 8 up, 8 in 2026-03-10T08:40:25.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:24 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:25.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:24 vm03 ceph-mon[50703]: osdmap e205: 8 total, 8 up, 8 in 2026-03-10T08:40:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:25 vm06 ceph-mon[54477]: osdmap e206: 8 total, 8 up, 8 in 2026-03-10T08:40:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:25 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3657472614' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:25 vm06 ceph-mon[54477]: pgmap v270: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:25 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3657472614' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:25 vm06 ceph-mon[54477]: osdmap e207: 8 total, 8 up, 8 in 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[57160]: osdmap e206: 8 total, 8 up, 8 in 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3657472614' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[57160]: pgmap v270: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3657472614' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[57160]: osdmap e207: 8 total, 8 up, 8 in 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[50703]: osdmap e206: 8 total, 8 up, 8 in 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3657472614' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[50703]: pgmap v270: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 451 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3657472614' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:25 vm03 ceph-mon[50703]: osdmap e207: 8 total, 8 up, 8 in 2026-03-10T08:40:26.741 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute_op PASSED [ 54%] 2026-03-10T08:40:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:27 vm06 ceph-mon[54477]: osdmap e208: 8 total, 8 up, 8 in 2026-03-10T08:40:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:27 vm06 ceph-mon[54477]: pgmap v273: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:28.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:27 vm03 ceph-mon[57160]: osdmap e208: 8 total, 8 up, 8 in 2026-03-10T08:40:28.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:27 vm03 ceph-mon[57160]: pgmap v273: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:28.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:27 vm03 ceph-mon[50703]: osdmap e208: 8 total, 8 up, 8 in 2026-03-10T08:40:28.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:27 vm03 ceph-mon[50703]: pgmap v273: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:29.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:28 vm06 ceph-mon[54477]: osdmap e209: 8 total, 8 up, 8 in 2026-03-10T08:40:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:28 vm03 ceph-mon[57160]: osdmap e209: 8 total, 8 up, 8 in 2026-03-10T08:40:29.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:28 vm03 ceph-mon[50703]: osdmap e209: 8 total, 8 up, 8 in 2026-03-10T08:40:29.778 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:40:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: 
::ffff:192.168.123.106 - - [10/Mar/2026:08:40:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:40:30.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:29 vm06 ceph-mon[54477]: osdmap e210: 8 total, 8 up, 8 in 2026-03-10T08:40:30.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:29 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3875536661' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:30.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:29 vm06 ceph-mon[54477]: from='client.24842 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:30.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:29 vm06 ceph-mon[54477]: pgmap v276: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[57160]: osdmap e210: 8 total, 8 up, 8 in 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3875536661' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[57160]: from='client.24842 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[57160]: pgmap v276: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[50703]: osdmap e210: 8 total, 8 up, 8 in 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3875536661' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[50703]: from='client.24842 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:30.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:29 vm03 ceph-mon[50703]: pgmap v276: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:30.772 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame_op PASSED [ 56%] 2026-03-10T08:40:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:30 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:30 vm06 ceph-mon[54477]: from='client.24842 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:30 vm06 ceph-mon[54477]: osdmap e211: 8 total, 8 up, 8 in 2026-03-10T08:40:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:30 vm06 ceph-mon[54477]: osdmap e212: 8 total, 8 up, 8 in 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[57160]: from='client.24842 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[57160]: osdmap e211: 8 total, 8 up, 8 in 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[57160]: osdmap e212: 8 
total, 8 up, 8 in 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[50703]: from='client.24842 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[50703]: osdmap e211: 8 total, 8 up, 8 in 2026-03-10T08:40:31.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:30 vm03 ceph-mon[50703]: osdmap e212: 8 total, 8 up, 8 in 2026-03-10T08:40:32.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:31 vm06 ceph-mon[54477]: pgmap v279: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:32.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:31 vm03 ceph-mon[57160]: pgmap v279: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:32.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:31 vm03 ceph-mon[50703]: pgmap v279: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:32.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:32.834 
INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:40:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:40:33.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:32 vm06 ceph-mon[54477]: osdmap e213: 8 total, 8 up, 8 in 2026-03-10T08:40:33.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:32 vm03 ceph-mon[57160]: osdmap e213: 8 total, 8 up, 8 in 2026-03-10T08:40:33.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:32 vm03 ceph-mon[50703]: osdmap e213: 8 total, 8 up, 8 in 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: osdmap e214: 8 total, 8 up, 8 in 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/98700627' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: from='client.24845 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: pgmap v282: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: from='client.24845 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[57160]: osdmap e215: 8 total, 8 up, 8 in 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: osdmap e214: 8 total, 8 up, 8 in 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/98700627' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: from='client.24845 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: pgmap v282: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: from='client.24845 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:34.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:33 vm03 ceph-mon[50703]: osdmap e215: 8 total, 8 up, 8 in 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: osdmap e214: 8 total, 8 up, 8 in 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/98700627' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: from='client.24845 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: pgmap v282: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: from='client.24845 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:33 vm06 ceph-mon[54477]: osdmap e215: 8 total, 8 up, 8 in 2026-03-10T08:40:34.824 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_vals_by_keys PASSED [ 57%] 2026-03-10T08:40:36.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:35 vm06 ceph-mon[54477]: osdmap e216: 8 total, 8 up, 8 in 2026-03-10T08:40:36.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:35 vm06 ceph-mon[54477]: pgmap v285: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:35 vm03 ceph-mon[57160]: osdmap e216: 8 total, 8 up, 8 in 2026-03-10T08:40:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:35 vm03 ceph-mon[57160]: pgmap v285: 164 pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:36.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:35 vm03 ceph-mon[50703]: osdmap e216: 8 total, 8 up, 8 in 2026-03-10T08:40:36.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:35 vm03 ceph-mon[50703]: pgmap v285: 164 
pgs: 164 active+clean; 455 KiB data, 456 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:37.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:36 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:37.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:36 vm06 ceph-mon[54477]: osdmap e217: 8 total, 8 up, 8 in 2026-03-10T08:40:37.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:36 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:37.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:36 vm03 ceph-mon[50703]: osdmap e217: 8 total, 8 up, 8 in 2026-03-10T08:40:37.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:36 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:37.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:36 vm03 ceph-mon[57160]: osdmap e217: 8 total, 8 up, 8 in 2026-03-10T08:40:38.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:37 vm03 ceph-mon[57160]: osdmap e218: 8 total, 8 up, 8 in 2026-03-10T08:40:38.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:37 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3636988080' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:38.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:37 vm03 ceph-mon[57160]: pgmap v288: 196 pgs: 196 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:38.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:37 vm03 ceph-mon[50703]: osdmap e218: 8 total, 8 up, 8 in 2026-03-10T08:40:38.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:37 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3636988080' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:38.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:37 vm03 ceph-mon[50703]: pgmap v288: 196 pgs: 196 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:37 vm06 ceph-mon[54477]: osdmap e218: 8 total, 8 up, 8 in 2026-03-10T08:40:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:37 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3636988080' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:37 vm06 ceph-mon[54477]: pgmap v288: 196 pgs: 196 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:40:38.876 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_keys PASSED [ 58%] 2026-03-10T08:40:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:38 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3636988080' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:38 vm03 ceph-mon[57160]: osdmap e219: 8 total, 8 up, 8 in 2026-03-10T08:40:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:38 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3636988080' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:38 vm03 ceph-mon[50703]: osdmap e219: 8 total, 8 up, 8 in 2026-03-10T08:40:39.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:38 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3636988080' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:39.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:38 vm06 ceph-mon[54477]: osdmap e219: 8 total, 8 up, 8 in 2026-03-10T08:40:39.917 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:40:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:40:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:40:40.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:39 vm03 ceph-mon[57160]: osdmap e220: 8 total, 8 up, 8 in 2026-03-10T08:40:40.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:39 vm03 ceph-mon[57160]: pgmap v291: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:40.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:39 vm03 ceph-mon[50703]: osdmap e220: 8 total, 8 up, 8 in 2026-03-10T08:40:40.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:39 vm03 ceph-mon[50703]: pgmap v291: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:39 vm06 ceph-mon[54477]: osdmap e220: 8 total, 8 up, 8 in 2026-03-10T08:40:40.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:39 vm06 ceph-mon[54477]: pgmap v291: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:41.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:40 vm06 ceph-mon[54477]: osdmap e221: 8 total, 8 up, 8 in 2026-03-10T08:40:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:40 vm03 ceph-mon[57160]: osdmap e221: 8 total, 8 up, 8 in 2026-03-10T08:40:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:40 vm03 ceph-mon[50703]: osdmap e221: 8 total, 8 up, 8 in 2026-03-10T08:40:42.339 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:41 vm06 ceph-mon[54477]: osdmap e222: 8 total, 8 up, 8 in 2026-03-10T08:40:42.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:41 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/555551591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:42.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:41 vm06 ceph-mon[54477]: from='client.24854 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:42.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:41 vm06 ceph-mon[54477]: pgmap v294: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:42.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:41 vm06 ceph-mon[54477]: from='client.24854 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:42.340 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:41 vm06 ceph-mon[54477]: osdmap e223: 8 total, 8 up, 8 in 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[57160]: osdmap e222: 8 total, 8 up, 8 in 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/555551591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[57160]: from='client.24854 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[57160]: pgmap v294: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[57160]: from='client.24854 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[57160]: osdmap e223: 8 total, 8 up, 8 in 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[50703]: osdmap e222: 8 total, 8 up, 8 in 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/555551591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[50703]: from='client.24854 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[50703]: pgmap v294: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[50703]: from='client.24854 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:41 vm03 ceph-mon[50703]: osdmap e223: 8 total, 8 up, 8 in 2026-03-10T08:40:42.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:40:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:40:42.958 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_clear_omap PASSED [ 59%] 2026-03-10T08:40:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:42 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:42 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:42 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:44 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T08:40:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:44 vm06 ceph-mon[54477]: osdmap e224: 8 total, 8 up, 8 in 2026-03-10T08:40:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:44 vm06 ceph-mon[54477]: pgmap v297: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:44.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:44 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:44.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:44 vm03 ceph-mon[57160]: osdmap e224: 8 total, 8 up, 8 in 2026-03-10T08:40:44.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:44 vm03 ceph-mon[57160]: pgmap v297: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:44.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:44 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:44.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:44 vm03 ceph-mon[50703]: osdmap e224: 8 total, 8 up, 8 in 2026-03-10T08:40:44.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:44 vm03 ceph-mon[50703]: pgmap v297: 164 pgs: 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:45 vm06 ceph-mon[54477]: osdmap e225: 8 total, 8 up, 8 in 2026-03-10T08:40:45.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:45 vm03 ceph-mon[57160]: osdmap e225: 8 total, 8 up, 8 in 2026-03-10T08:40:45.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:45 vm03 ceph-mon[50703]: osdmap e225: 8 total, 8 up, 8 in 2026-03-10T08:40:46.339 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:46 vm06 ceph-mon[54477]: osdmap e226: 8 total, 8 up, 8 in 2026-03-10T08:40:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:46 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2179440193' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:46 vm06 ceph-mon[54477]: pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:46 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2179440193' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:46.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:46 vm06 ceph-mon[54477]: osdmap e227: 8 total, 8 up, 8 in 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[57160]: osdmap e226: 8 total, 8 up, 8 in 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2179440193' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[57160]: pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2179440193' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[57160]: osdmap e227: 8 total, 8 up, 8 in 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[50703]: osdmap e226: 8 total, 8 up, 8 in 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2179440193' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[50703]: pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 461 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2179440193' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:46.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:46 vm03 ceph-mon[50703]: osdmap e227: 8 total, 8 up, 8 in 2026-03-10T08:40:47.029 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_omap_range2 PASSED [ 60%] 2026-03-10T08:40:47.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:47 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:47.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:47 vm06 ceph-mon[54477]: osdmap e228: 8 total, 8 up, 8 in 2026-03-10T08:40:47.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:47 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:47.428 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:47 vm03 ceph-mon[57160]: osdmap e228: 8 total, 8 up, 8 in 2026-03-10T08:40:47.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:47 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:40:47.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:47 vm03 ceph-mon[50703]: osdmap e228: 8 total, 8 up, 8 in 2026-03-10T08:40:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:48 vm06 ceph-mon[54477]: pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:48.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:48 vm06 ceph-mon[54477]: osdmap e229: 8 total, 8 up, 8 in 2026-03-10T08:40:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:48 vm03 ceph-mon[57160]: pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:48.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:48 vm03 ceph-mon[57160]: osdmap e229: 8 total, 8 up, 8 in 2026-03-10T08:40:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:48 vm03 ceph-mon[50703]: pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:48.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:48 vm03 ceph-mon[50703]: osdmap e229: 8 total, 8 up, 8 in 2026-03-10T08:40:49.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:49 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:49.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:49 vm03 ceph-mon[57160]: osdmap e230: 8 total, 8 up, 8 in 2026-03-10T08:40:49.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:49 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:49.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:49 vm03 ceph-mon[50703]: osdmap e230: 8 total, 8 up, 8 in 2026-03-10T08:40:49.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:49 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:49.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:49 vm06 ceph-mon[54477]: osdmap e230: 8 total, 8 up, 8 in 2026-03-10T08:40:49.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:40:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:40:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:40:50.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1669243099' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[57160]: from='client.24863 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[57160]: pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[57160]: from='client.24863 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[57160]: osdmap e231: 8 total, 8 up, 8 in 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1669243099' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[50703]: from='client.24863 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[50703]: pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[50703]: from='client.24863 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:50.436 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:50 vm03 ceph-mon[50703]: osdmap e231: 8 total, 8 up, 8 in 2026-03-10T08:40:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:50 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1669243099' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:50 vm06 ceph-mon[54477]: from='client.24863 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:50 vm06 ceph-mon[54477]: pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:50 vm06 ceph-mon[54477]: from='client.24863 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:50 vm06 ceph-mon[54477]: osdmap e231: 8 total, 8 up, 8 in 2026-03-10T08:40:51.083 
INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_omap_cmp PASSED [ 61%] 2026-03-10T08:40:52.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:52 vm03 ceph-mon[57160]: osdmap e232: 8 total, 8 up, 8 in 2026-03-10T08:40:52.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:52 vm03 ceph-mon[57160]: pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:52.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:52 vm03 ceph-mon[50703]: osdmap e232: 8 total, 8 up, 8 in 2026-03-10T08:40:52.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:52 vm03 ceph-mon[50703]: pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:52.500 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:52 vm06 ceph-mon[54477]: osdmap e232: 8 total, 8 up, 8 in 2026-03-10T08:40:52.500 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:52 vm06 ceph-mon[54477]: pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 470 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:40:52.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:40:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:40:53.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:53 vm03 ceph-mon[57160]: osdmap e233: 8 total, 8 up, 8 in 2026-03-10T08:40:53.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:53 vm03 ceph-mon[50703]: osdmap e233: 8 total, 8 up, 8 in 2026-03-10T08:40:53.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:53 vm06 ceph-mon[54477]: osdmap e233: 8 total, 8 up, 8 in 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:54.428 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[57160]: osdmap e234: 8 total, 8 up, 8 in 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2980462060' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[57160]: pgmap v312: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[50703]: osdmap e234: 8 total, 8 up, 8 in 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2980462060' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:54.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:54 vm03 ceph-mon[50703]: pgmap v312: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:54 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:40:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:54 vm06 ceph-mon[54477]: osdmap e234: 8 total, 8 up, 8 in 2026-03-10T08:40:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:54 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2980462060' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:54 vm06 ceph-mon[54477]: pgmap v312: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:55.121 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext_op PASSED [ 62%] 2026-03-10T08:40:55.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:55 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:55.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:55 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2980462060' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:55.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:55 vm03 ceph-mon[57160]: osdmap e235: 8 total, 8 up, 8 in 2026-03-10T08:40:55.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:55 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:55.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:55 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2980462060' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:55.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:55 vm03 ceph-mon[50703]: osdmap e235: 8 total, 8 up, 8 in 2026-03-10T08:40:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:55 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:40:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:55 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2980462060' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:55 vm06 ceph-mon[54477]: osdmap e235: 8 total, 8 up, 8 in 2026-03-10T08:40:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:56 vm03 ceph-mon[57160]: osdmap e236: 8 total, 8 up, 8 in 2026-03-10T08:40:56.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:56 vm03 ceph-mon[57160]: pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:56.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:56 vm03 ceph-mon[50703]: osdmap e236: 8 total, 8 up, 8 in 2026-03-10T08:40:56.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:56 vm03 ceph-mon[50703]: pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:56.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:56 vm06 ceph-mon[54477]: osdmap e236: 8 total, 8 up, 8 in 2026-03-10T08:40:56.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:56 vm06 ceph-mon[54477]: pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[57160]: osdmap e237: 8 total, 8 up, 8 in 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[57160]: from='mgr.14706 
v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[50703]: osdmap e237: 8 total, 8 up, 8 in 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:40:57.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:57 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:40:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:57 vm06 ceph-mon[54477]: osdmap e237: 8 total, 8 up, 8 in 2026-03-10T08:40:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:40:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:40:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-10T08:40:57.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:57 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:40:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:58 vm03 ceph-mon[57160]: osdmap e238: 8 total, 8 up, 8 in 2026-03-10T08:40:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:58 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/280875722' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:58.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:58 vm03 ceph-mon[57160]: pgmap v318: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T08:40:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:58 vm03 ceph-mon[50703]: osdmap e238: 8 total, 8 up, 8 in 2026-03-10T08:40:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:58 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/280875722' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:58.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:58 vm03 ceph-mon[50703]: pgmap v318: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T08:40:58.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:58 vm06 ceph-mon[54477]: osdmap e238: 8 total, 8 up, 8 in 2026-03-10T08:40:58.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:58 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/280875722' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:40:58.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:58 vm06 ceph-mon[54477]: pgmap v318: 196 pgs: 196 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T08:40:59.161 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs_op PASSED [ 63%] 2026-03-10T08:40:59.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:59 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/280875722' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:59.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:40:59 vm03 ceph-mon[57160]: osdmap e239: 8 total, 8 up, 8 in 2026-03-10T08:40:59.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:59 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/280875722' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:59.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:40:59 vm03 ceph-mon[50703]: osdmap e239: 8 total, 8 up, 8 in 2026-03-10T08:40:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:59 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/280875722' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:40:59.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:40:59 vm06 ceph-mon[54477]: osdmap e239: 8 total, 8 up, 8 in 2026-03-10T08:40:59.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:40:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:40:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:41:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:00 vm06 ceph-mon[54477]: osdmap e240: 8 total, 8 up, 8 in 2026-03-10T08:41:00.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:00 vm06 ceph-mon[54477]: pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:00 vm03 ceph-mon[57160]: osdmap e240: 8 total, 8 up, 8 in 2026-03-10T08:41:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:00 vm03 ceph-mon[57160]: pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:00 vm03 ceph-mon[50703]: osdmap e240: 8 total, 8 up, 8 in 2026-03-10T08:41:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:00 vm03 ceph-mon[50703]: pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:01.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:01 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:01.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:01 vm06 ceph-mon[54477]: osdmap e241: 8 total, 8 up, 8 in 2026-03-10T08:41:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:01 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not 
have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:01 vm03 ceph-mon[57160]: osdmap e241: 8 total, 8 up, 8 in 2026-03-10T08:41:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:01 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:01 vm03 ceph-mon[50703]: osdmap e241: 8 total, 8 up, 8 in 2026-03-10T08:41:02.511 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:02 vm06 ceph-mon[54477]: osdmap e242: 8 total, 8 up, 8 in 2026-03-10T08:41:02.511 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:02 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1240820847' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:02.511 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:02 vm06 ceph-mon[54477]: from='client.24875 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:02.511 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:02 vm06 ceph-mon[54477]: pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:02.511 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:02 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[57160]: osdmap e242: 8 total, 8 up, 8 in 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1240820847' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[57160]: from='client.24875 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[57160]: pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[50703]: osdmap e242: 8 total, 8 up, 8 in 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1240820847' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[50703]: from='client.24875 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[50703]: pgmap v324: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 471 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:02.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:02 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:02.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:41:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:41:03.232 
INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_locator PASSED [ 64%] 2026-03-10T08:41:03.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:03 vm06 ceph-mon[54477]: from='client.24875 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:03.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:03 vm06 ceph-mon[54477]: osdmap e243: 8 total, 8 up, 8 in 2026-03-10T08:41:03.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:03 vm03 ceph-mon[57160]: from='client.24875 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:03.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:03 vm03 ceph-mon[57160]: osdmap e243: 8 total, 8 up, 8 in 2026-03-10T08:41:03.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:03 vm03 ceph-mon[50703]: from='client.24875 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:03.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:03 vm03 ceph-mon[50703]: osdmap e243: 8 total, 8 up, 8 in 2026-03-10T08:41:04.579 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:04 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:04.579 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:04 vm03 ceph-mon[57160]: osdmap e244: 8 total, 8 up, 8 in 2026-03-10T08:41:04.579 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:04 vm03 ceph-mon[57160]: pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:04.579 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:04 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T08:41:04.579 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:04 vm03 ceph-mon[50703]: osdmap e244: 8 total, 8 up, 8 in 2026-03-10T08:41:04.579 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:04 vm03 ceph-mon[50703]: pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:04.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:04 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:04.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:04 vm06 ceph-mon[54477]: osdmap e244: 8 total, 8 up, 8 in 2026-03-10T08:41:04.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:04 vm06 ceph-mon[54477]: pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:05.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:05 vm06 ceph-mon[54477]: osdmap e245: 8 total, 8 up, 8 in 2026-03-10T08:41:05.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:05 vm03 ceph-mon[57160]: osdmap e245: 8 total, 8 up, 8 in 2026-03-10T08:41:05.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:05 vm03 ceph-mon[50703]: osdmap e245: 8 total, 8 up, 8 in 2026-03-10T08:41:06.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:06 vm06 ceph-mon[54477]: osdmap e246: 8 total, 8 up, 8 in 2026-03-10T08:41:06.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:06 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/417289453' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:06.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:06 vm06 ceph-mon[54477]: pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:06.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:06 vm03 ceph-mon[57160]: osdmap e246: 8 total, 8 up, 8 in 2026-03-10T08:41:06.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:06 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/417289453' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:06.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:06 vm03 ceph-mon[57160]: pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:06.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:06 vm03 ceph-mon[50703]: osdmap e246: 8 total, 8 up, 8 in 2026-03-10T08:41:06.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:06 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/417289453' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:06.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:06 vm03 ceph-mon[50703]: pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 472 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:07.297 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_operate_aio_write_op PASSED [ 65%] 2026-03-10T08:41:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:07 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:07 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/417289453' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:07.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:07 vm06 ceph-mon[54477]: osdmap e247: 8 total, 8 up, 8 in 2026-03-10T08:41:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:07 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:07 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/417289453' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:07.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:07 vm03 ceph-mon[57160]: osdmap e247: 8 total, 8 up, 8 in 2026-03-10T08:41:07.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:07 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:07.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:07 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/417289453' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:07.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:07 vm03 ceph-mon[50703]: osdmap e247: 8 total, 8 up, 8 in 2026-03-10T08:41:08.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:08 vm06 ceph-mon[54477]: osdmap e248: 8 total, 8 up, 8 in 2026-03-10T08:41:08.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:08 vm06 ceph-mon[54477]: pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:08.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:08 vm03 ceph-mon[57160]: osdmap e248: 8 total, 8 up, 8 in 2026-03-10T08:41:08.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:08 vm03 ceph-mon[57160]: pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:08.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:08 vm03 ceph-mon[50703]: osdmap e248: 8 total, 8 up, 8 in 2026-03-10T08:41:08.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:08 vm03 ceph-mon[50703]: pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:09.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:09 vm06 ceph-mon[54477]: osdmap e249: 8 total, 8 up, 8 in 2026-03-10T08:41:09.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:09 vm03 ceph-mon[57160]: osdmap e249: 8 total, 8 up, 8 in 2026-03-10T08:41:09.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:41:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:41:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:41:09.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:09 vm03 ceph-mon[50703]: osdmap e249: 8 total, 8 up, 8 in 2026-03-10T08:41:10.589 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:10 vm06 ceph-mon[54477]: osdmap e250: 8 total, 8 up, 8 in 2026-03-10T08:41:10.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:10 vm06 ceph-mon[54477]: pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:10.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:10 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2691222721' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:10.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:10 vm06 ceph-mon[54477]: from='client.24898 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:10.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:10 vm06 ceph-mon[54477]: from='client.24898 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:10.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:10 vm06 ceph-mon[54477]: osdmap e251: 8 total, 8 up, 8 in 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[57160]: osdmap e250: 8 total, 8 up, 8 in 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[57160]: pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2691222721' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[57160]: from='client.24898 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[57160]: from='client.24898 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[57160]: osdmap e251: 8 total, 8 up, 8 in 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[50703]: osdmap e250: 8 total, 8 up, 8 in 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[50703]: pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2691222721' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[50703]: from='client.24898 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[50703]: from='client.24898 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:10.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:10 vm03 ceph-mon[50703]: osdmap e251: 8 total, 8 up, 8 in 2026-03-10T08:41:11.398 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write PASSED [ 67%] 2026-03-10T08:41:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:12 vm03 ceph-mon[57160]: pgmap v338: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:12.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:12 vm03 ceph-mon[57160]: osdmap e252: 8 total, 8 up, 8 in 2026-03-10T08:41:12.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:12 vm03 ceph-mon[50703]: pgmap v338: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:12.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:12 vm03 ceph-mon[50703]: osdmap e252: 8 total, 8 up, 8 in 2026-03-10T08:41:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:12 vm06 ceph-mon[54477]: pgmap v338: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 476 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:12 vm06 ceph-mon[54477]: osdmap e252: 8 total, 8 up, 8 in 2026-03-10T08:41:12.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:41:12 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:41:13.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:13 vm03 ceph-mon[57160]: osdmap e253: 8 total, 8 up, 8 in 2026-03-10T08:41:13.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:13 vm03 ceph-mon[50703]: osdmap e253: 8 total, 8 up, 8 in 2026-03-10T08:41:13.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:13 vm06 ceph-mon[54477]: osdmap e253: 8 total, 8 up, 8 in 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[57160]: pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[57160]: osdmap e254: 8 total, 8 up, 8 in 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3822329633' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[50703]: pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[50703]: osdmap e254: 8 total, 8 up, 8 in 2026-03-10T08:41:14.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:14 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3822329633' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:14.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:14 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:14.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:14 vm06 ceph-mon[54477]: pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:14.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:14 vm06 ceph-mon[54477]: osdmap e254: 8 total, 8 up, 8 in 2026-03-10T08:41:14.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:14 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3822329633' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:15.441 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_cmpext PASSED [ 68%] 2026-03-10T08:41:15.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:15 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3822329633' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:15.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:15 vm06 ceph-mon[54477]: osdmap e255: 8 total, 8 up, 8 in 2026-03-10T08:41:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:15 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3822329633' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:15 vm03 ceph-mon[57160]: osdmap e255: 8 total, 8 up, 8 in 2026-03-10T08:41:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:15 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3822329633' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:15 vm03 ceph-mon[50703]: osdmap e255: 8 total, 8 up, 8 in 2026-03-10T08:41:16.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:16 vm06 ceph-mon[54477]: pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:16.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:16 vm06 ceph-mon[54477]: osdmap e256: 8 total, 8 up, 8 in 2026-03-10T08:41:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:16 vm03 ceph-mon[57160]: pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:16.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:16 vm03 ceph-mon[57160]: osdmap e256: 8 total, 8 up, 8 in 2026-03-10T08:41:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:16 vm03 ceph-mon[50703]: pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:16.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:16 vm03 ceph-mon[50703]: osdmap e256: 8 total, 8 up, 8 in 2026-03-10T08:41:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:17 vm06 ceph-mon[54477]: osdmap e257: 8 total, 8 up, 8 in 2026-03-10T08:41:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:17 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:17.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:17 vm06 ceph-mon[54477]: osdmap e258: 8 total, 8 up, 8 in 2026-03-10T08:41:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:17 vm03 ceph-mon[57160]: osdmap e257: 8 total, 8 up, 8 in 2026-03-10T08:41:17.928 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:17 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:17.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:17 vm03 ceph-mon[57160]: osdmap e258: 8 total, 8 up, 8 in 2026-03-10T08:41:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:17 vm03 ceph-mon[50703]: osdmap e257: 8 total, 8 up, 8 in 2026-03-10T08:41:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:17 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:17.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:17 vm03 ceph-mon[50703]: osdmap e258: 8 total, 8 up, 8 in 2026-03-10T08:41:18.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:18 vm06 ceph-mon[54477]: pgmap v347: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:18.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:18 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/634116097' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:18.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:18 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/634116097' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:18.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:18 vm06 ceph-mon[54477]: osdmap e259: 8 total, 8 up, 8 in 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[57160]: pgmap v347: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/634116097' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/634116097' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[57160]: osdmap e259: 8 total, 8 up, 8 in 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[50703]: pgmap v347: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/634116097' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/634116097' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:18.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:18 vm03 ceph-mon[50703]: osdmap e259: 8 total, 8 up, 8 in 2026-03-10T08:41:19.481 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_rmxattr PASSED [ 69%] 2026-03-10T08:41:19.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:41:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:41:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:41:20.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:20 vm06 ceph-mon[54477]: pgmap v350: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:20.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:20 vm06 ceph-mon[54477]: osdmap e260: 8 total, 8 up, 8 in 2026-03-10T08:41:20.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:20 vm03 ceph-mon[57160]: pgmap v350: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:20.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:20 vm03 ceph-mon[57160]: osdmap e260: 8 total, 8 up, 8 in 2026-03-10T08:41:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:20 vm03 ceph-mon[50703]: pgmap v350: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:20.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:20 vm03 ceph-mon[50703]: osdmap e260: 8 total, 8 up, 8 in 2026-03-10T08:41:21.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:21 vm06 ceph-mon[54477]: osdmap e261: 8 total, 8 up, 8 in 2026-03-10T08:41:21.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:21 vm06 ceph-mon[54477]: 
pgmap v353: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:21.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:21 vm03 ceph-mon[57160]: osdmap e261: 8 total, 8 up, 8 in 2026-03-10T08:41:21.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:21 vm03 ceph-mon[57160]: pgmap v353: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:21.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:21 vm03 ceph-mon[50703]: osdmap e261: 8 total, 8 up, 8 in 2026-03-10T08:41:21.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:21 vm03 ceph-mon[50703]: pgmap v353: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:22 vm06 ceph-mon[54477]: osdmap e262: 8 total, 8 up, 8 in 2026-03-10T08:41:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:22 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/425714946' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:22.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:22 vm06 ceph-mon[54477]: from='client.24904 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:22.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:41:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:41:22.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:22 vm03 ceph-mon[57160]: osdmap e262: 8 total, 8 up, 8 in 2026-03-10T08:41:22.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:22 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/425714946' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:22.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:22 vm03 ceph-mon[57160]: from='client.24904 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:22.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:22 vm03 ceph-mon[50703]: osdmap e262: 8 total, 8 up, 8 in 2026-03-10T08:41:22.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:22 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/425714946' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:22.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:22 vm03 ceph-mon[50703]: from='client.24904 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:23.519 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_no_comp_ref PASSED [ 70%] 2026-03-10T08:41:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:23 vm06 ceph-mon[54477]: from='client.24904 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:23 vm06 ceph-mon[54477]: osdmap e263: 8 total, 8 up, 8 in 2026-03-10T08:41:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:23 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:23 vm06 ceph-mon[54477]: pgmap v356: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:23.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:23 vm06 ceph-mon[54477]: osdmap e264: 8 total, 8 up, 8 in 
2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[57160]: from='client.24904 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[57160]: osdmap e263: 8 total, 8 up, 8 in 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[57160]: pgmap v356: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[57160]: osdmap e264: 8 total, 8 up, 8 in 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[50703]: from='client.24904 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[50703]: osdmap e263: 8 total, 8 up, 8 in 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[50703]: pgmap v356: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:23.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:23 vm03 ceph-mon[50703]: osdmap e264: 8 total, 8 up, 8 in 2026-03-10T08:41:25.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:25 vm06 ceph-mon[54477]: osdmap e265: 8 
total, 8 up, 8 in 2026-03-10T08:41:25.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:25 vm06 ceph-mon[54477]: pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:25.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:25 vm03 ceph-mon[57160]: osdmap e265: 8 total, 8 up, 8 in 2026-03-10T08:41:25.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:25 vm03 ceph-mon[57160]: pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:25.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:25 vm03 ceph-mon[50703]: osdmap e265: 8 total, 8 up, 8 in 2026-03-10T08:41:25.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:25 vm03 ceph-mon[50703]: pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:26 vm06 ceph-mon[54477]: osdmap e266: 8 total, 8 up, 8 in 2026-03-10T08:41:26.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:26 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2401828001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:26.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:26 vm03 ceph-mon[57160]: osdmap e266: 8 total, 8 up, 8 in 2026-03-10T08:41:26.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:26 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2401828001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:26.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:26 vm03 ceph-mon[50703]: osdmap e266: 8 total, 8 up, 8 in 2026-03-10T08:41:26.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:26 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2401828001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:27.587 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_append PASSED [ 71%] 2026-03-10T08:41:27.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:27 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2401828001' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:27.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:27 vm03 ceph-mon[57160]: osdmap e267: 8 total, 8 up, 8 in 2026-03-10T08:41:27.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:27 vm03 ceph-mon[57160]: pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:27.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:27 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2401828001' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:27.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:27 vm03 ceph-mon[50703]: osdmap e267: 8 total, 8 up, 8 in 2026-03-10T08:41:27.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:27 vm03 ceph-mon[50703]: pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:27 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2401828001' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:27 vm06 ceph-mon[54477]: osdmap e267: 8 total, 8 up, 8 in 2026-03-10T08:41:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:27 vm06 ceph-mon[54477]: pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:28.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:28 vm03 ceph-mon[57160]: osdmap e268: 8 total, 8 up, 8 in 2026-03-10T08:41:28.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:28 vm03 ceph-mon[50703]: osdmap e268: 8 total, 8 up, 8 in 2026-03-10T08:41:29.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:28 vm06 ceph-mon[54477]: osdmap e268: 8 total, 8 up, 8 in 2026-03-10T08:41:29.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:29 vm03 ceph-mon[57160]: osdmap e269: 8 total, 8 up, 8 in 2026-03-10T08:41:29.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:29 vm03 ceph-mon[57160]: pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:29.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:41:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:41:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:41:29.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:29 vm03 ceph-mon[50703]: osdmap e269: 8 total, 8 up, 8 in 2026-03-10T08:41:29.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:29 vm03 ceph-mon[50703]: pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:30.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:29 vm06 ceph-mon[54477]: osdmap e269: 8 total, 8 up, 8 in 2026-03-10T08:41:30.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:29 vm06 ceph-mon[54477]: pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:30.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:30 vm03 ceph-mon[57160]: osdmap e270: 8 total, 8 up, 8 in 2026-03-10T08:41:30.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:30 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1105086446' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:30.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:30 vm03 ceph-mon[57160]: from='client.24910 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:30.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:30 vm03 ceph-mon[50703]: osdmap e270: 8 total, 8 up, 8 in 2026-03-10T08:41:30.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:30 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1105086446' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:30.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:30 vm03 ceph-mon[50703]: from='client.24910 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:30 vm06 ceph-mon[54477]: osdmap e270: 8 total, 8 up, 8 in 2026-03-10T08:41:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:30 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1105086446' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:31.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:30 vm06 ceph-mon[54477]: from='client.24910 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:31.663 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_full PASSED [ 72%] 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[57160]: from='client.24910 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[57160]: osdmap e271: 8 total, 8 up, 8 in 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[57160]: pgmap v368: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[50703]: from='client.24910 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[50703]: osdmap e271: 8 total, 8 up, 8 in 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[50703]: pgmap v368: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:31.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:31 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T08:41:32.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:31 vm06 ceph-mon[54477]: from='client.24910 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:32.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:31 vm06 ceph-mon[54477]: osdmap e271: 8 total, 8 up, 8 in 2026-03-10T08:41:32.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:31 vm06 ceph-mon[54477]: pgmap v368: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:32.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:31 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:32.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:41:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:41:32.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:32 vm06 ceph-mon[54477]: osdmap e272: 8 total, 8 up, 8 in 2026-03-10T08:41:32.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:32 vm03 ceph-mon[57160]: osdmap e272: 8 total, 8 up, 8 in 2026-03-10T08:41:32.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:32 vm03 ceph-mon[50703]: osdmap e272: 8 total, 8 up, 8 in 2026-03-10T08:41:33.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:33 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:33.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:33 vm03 ceph-mon[57160]: osdmap e273: 8 total, 8 up, 8 in 2026-03-10T08:41:33.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:33 vm03 ceph-mon[57160]: pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T08:41:33.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:33 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:33.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:33 vm03 ceph-mon[50703]: osdmap e273: 8 total, 8 up, 8 in 2026-03-10T08:41:33.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:33 vm03 ceph-mon[50703]: pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:34.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:33 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:34.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:33 vm06 ceph-mon[54477]: osdmap e273: 8 total, 8 up, 8 in 2026-03-10T08:41:34.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:33 vm06 ceph-mon[54477]: pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:34.682 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:34 vm03 ceph-mon[50703]: osdmap e274: 8 total, 8 up, 8 in 2026-03-10T08:41:34.682 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:34 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3435849547' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:34.688 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:34 vm03 ceph-mon[57160]: osdmap e274: 8 total, 8 up, 8 in 2026-03-10T08:41:34.688 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:34 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3435849547' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:35.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:34 vm06 ceph-mon[54477]: osdmap e274: 8 total, 8 up, 8 in 2026-03-10T08:41:35.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:34 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3435849547' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:35.710 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_writesame PASSED [ 73%] 2026-03-10T08:41:36.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:35 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3435849547' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:36.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:35 vm06 ceph-mon[54477]: osdmap e275: 8 total, 8 up, 8 in 2026-03-10T08:41:36.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:35 vm06 ceph-mon[54477]: pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:35 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3435849547' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:35 vm03 ceph-mon[57160]: osdmap e275: 8 total, 8 up, 8 in 2026-03-10T08:41:36.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:35 vm03 ceph-mon[57160]: pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:36.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:35 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3435849547' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:36.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:35 vm03 ceph-mon[50703]: osdmap e275: 8 total, 8 up, 8 in 2026-03-10T08:41:36.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:35 vm03 ceph-mon[50703]: pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:36.990 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:36 vm06 ceph-mon[54477]: osdmap e276: 8 total, 8 up, 8 in 2026-03-10T08:41:37.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:36 vm03 ceph-mon[57160]: osdmap e276: 8 total, 8 up, 8 in 2026-03-10T08:41:37.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:36 vm03 ceph-mon[50703]: osdmap e276: 8 total, 8 up, 8 in 2026-03-10T08:41:38.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:37 vm06 ceph-mon[54477]: osdmap e277: 8 total, 8 up, 8 in 2026-03-10T08:41:38.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:37 vm06 ceph-mon[54477]: pgmap v377: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:38.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:37 vm03 ceph-mon[57160]: osdmap e277: 8 total, 8 up, 8 in 2026-03-10T08:41:38.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:37 vm03 ceph-mon[57160]: pgmap v377: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:38.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:37 vm03 ceph-mon[50703]: osdmap e277: 8 total, 8 up, 8 in 2026-03-10T08:41:38.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:37 vm03 ceph-mon[50703]: pgmap v377: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T08:41:39.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:38 vm06 ceph-mon[54477]: osdmap e278: 8 total, 8 up, 8 in 2026-03-10T08:41:39.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:38 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/2647900897' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:39.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:38 vm06 ceph-mon[54477]: from='client.24890 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:39.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:38 vm06 ceph-mon[54477]: from='client.24890 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:39.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:38 vm06 ceph-mon[54477]: osdmap e279: 8 total, 8 up, 8 in 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[57160]: osdmap e278: 8 total, 8 up, 8 in 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2647900897' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[57160]: from='client.24890 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[57160]: from='client.24890 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[57160]: osdmap e279: 8 total, 8 up, 8 in 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[50703]: osdmap e278: 8 total, 8 up, 8 in 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2647900897' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[50703]: from='client.24890 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[50703]: from='client.24890 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:39.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:38 vm03 ceph-mon[50703]: osdmap e279: 8 total, 8 up, 8 in 2026-03-10T08:41:39.743 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_stat PASSED [ 74%] 2026-03-10T08:41:39.781 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:41:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:41:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:41:40.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:39 vm06 ceph-mon[54477]: pgmap v380: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:40.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:39 vm06 ceph-mon[54477]: osdmap e280: 8 total, 8 up, 8 in 2026-03-10T08:41:40.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:39 vm03 ceph-mon[57160]: pgmap v380: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:40.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:39 vm03 ceph-mon[57160]: osdmap e280: 8 total, 8 up, 8 in 2026-03-10T08:41:40.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:39 vm03 ceph-mon[50703]: pgmap v380: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:40.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:39 vm03 ceph-mon[50703]: osdmap e280: 8 total, 8 up, 8 in 2026-03-10T08:41:42.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:41 vm06 ceph-mon[54477]: osdmap e281: 8 total, 8 up, 8 in 2026-03-10T08:41:42.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:41 vm06 ceph-mon[54477]: pgmap v383: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:42.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:41 vm03 ceph-mon[57160]: osdmap e281: 8 total, 8 up, 8 in 2026-03-10T08:41:42.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:41 vm03 ceph-mon[57160]: pgmap v383: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:42.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:41 vm03 ceph-mon[50703]: osdmap e281: 8 total, 8 up, 8 in 2026-03-10T08:41:42.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:41 vm03 ceph-mon[50703]: pgmap 
v383: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail 2026-03-10T08:41:42.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:41:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:41:43.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:42 vm03 ceph-mon[57160]: osdmap e282: 8 total, 8 up, 8 in 2026-03-10T08:41:43.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:42 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/551968453' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:43.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:42 vm03 ceph-mon[57160]: from='client.24925 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:43.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:42 vm03 ceph-mon[50703]: osdmap e282: 8 total, 8 up, 8 in 2026-03-10T08:41:43.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:42 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/551968453' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:43.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:42 vm03 ceph-mon[50703]: from='client.24925 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:42 vm06 ceph-mon[54477]: osdmap e282: 8 total, 8 up, 8 in 2026-03-10T08:41:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:42 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/551968453' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:43.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:42 vm06 ceph-mon[54477]: from='client.24925 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:43.902 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_remove PASSED [ 75%] 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[57160]: from='client.24925 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[57160]: osdmap e283: 8 total, 8 up, 8 in 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[57160]: pgmap v386: 196 pgs: 196 active+clean; 455 KiB data, 480 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[50703]: from='client.24925 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[50703]: osdmap e283: 8 total, 8 up, 8 in 2026-03-10T08:41:44.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:43 vm03 ceph-mon[50703]: pgmap v386: 196 pgs: 196 active+clean; 455 
KiB data, 480 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:43 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:43 vm06 ceph-mon[54477]: from='client.24925 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:43 vm06 ceph-mon[54477]: osdmap e283: 8 total, 8 up, 8 in 2026-03-10T08:41:44.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:43 vm06 ceph-mon[54477]: pgmap v386: 196 pgs: 196 active+clean; 455 KiB data, 480 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:45.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:44 vm03 ceph-mon[57160]: osdmap e284: 8 total, 8 up, 8 in 2026-03-10T08:41:45.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:44 vm03 ceph-mon[50703]: osdmap e284: 8 total, 8 up, 8 in 2026-03-10T08:41:45.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:44 vm06 ceph-mon[54477]: osdmap e284: 8 total, 8 up, 8 in 2026-03-10T08:41:46.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:46 vm06 ceph-mon[54477]: osdmap e285: 8 total, 8 up, 8 in 2026-03-10T08:41:46.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:46 vm06 ceph-mon[54477]: pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 480 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:46 vm03 ceph-mon[57160]: osdmap e285: 8 total, 8 up, 8 in 2026-03-10T08:41:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:46 vm03 ceph-mon[57160]: pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 480 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-10T08:41:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:46 vm03 ceph-mon[50703]: osdmap e285: 8 total, 8 up, 8 in 2026-03-10T08:41:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:46 vm03 ceph-mon[50703]: pgmap v389: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 480 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:41:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:47 vm06 ceph-mon[54477]: osdmap e286: 8 total, 8 up, 8 in 2026-03-10T08:41:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:47 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T08:41:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:47 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:47 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[57160]: osdmap e286: 8 total, 8 up, 8 in 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[50703]: osdmap e286: 8 total, 8 up, 8 in 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:47 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:41:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:47 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a[50699]: 2026-03-10T08:41:47.208+0000 7f8dbdb2b640 -1 mon.a@0(leader).osd e287 definitely_dead 0 2026-03-10T08:41:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:48 vm06 ceph-mon[54477]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:41:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:48 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:41:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:48 vm06 ceph-mon[54477]: osdmap e287: 8 total, 8 up, 8 in 2026-03-10T08:41:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:48 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T08:41:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:48 vm06 ceph-mon[54477]: pgmap v392: 196 pgs: 8 creating+activating, 188 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:41:48.677 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[57160]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[57160]: osdmap e287: 8 total, 8 up, 8 in 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[57160]: pgmap v392: 196 pgs: 8 creating+activating, 188 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[50703]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[50703]: osdmap e287: 8 total, 8 up, 8 in 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T08:41:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:48 vm03 ceph-mon[50703]: pgmap v392: 196 pgs: 8 creating+activating, 188 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T08:41:49.526 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[57160]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[57160]: osdmap e288: 8 total, 5 up, 8 in 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[57160]: osd.0 marked itself dead as of e288 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[50703]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[50703]: osdmap e288: 8 total, 5 up, 8 in 2026-03-10T08:41:49.527 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:49 vm03 ceph-mon[50703]: osd.0 marked itself dead as of e288 2026-03-10T08:41:49.557 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:49 vm06 ceph-mon[54477]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:41:49.557 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:49 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T08:41:49.557 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:49 vm06 ceph-mon[54477]: osdmap e288: 8 total, 5 up, 8 in 2026-03-10T08:41:49.557 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:49 vm06 ceph-mon[54477]: osd.0 marked itself dead as of e288 2026-03-10T08:41:49.839 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:41:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:41:49.554+0000 7fe825377640 -1 osd.4 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:41:49.839 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:41:49 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:41:49.633+0000 7f788ce0c640 -1 osd.7 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:41:49.928 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:41:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:41:49.558+0000 7f8c4cf3e640 -1 osd.0 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:41:49.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:41:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:41:49] "GET /metrics HTTP/1.1" 503 1621 "" 
"Prometheus/2.51.0" 2026-03-10T08:41:50.589 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:41:50 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:41:50.219+0000 7fe817f53640 -1 osd.4 290 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:41:50.589 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:41:50 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:41:50.228+0000 7f787f9e8640 -1 osd.7 290 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: Monitor daemon marked osd.0 down, but it is still running 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: map e288 wrongly marked me down at e288 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: osdmap e289: 8 total, 5 up, 8 in 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: pgmap v395: 196 pgs: 1 stale+creating+activating, 83 stale+active+clean, 7 creating+activating, 105 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: Monitor daemon marked osd.4 down, but it is still running 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: map e289 wrongly marked me down at e288 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: osd.4 marked itself dead as of e289 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: map e289 wrongly marked me down at e288 
2026-03-10T08:41:50.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:50 vm06 ceph-mon[54477]: osd.7 marked itself dead as of e289 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: Monitor daemon marked osd.0 down, but it is still running 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: map e288 wrongly marked me down at e288 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: osdmap e289: 8 total, 5 up, 8 in 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: pgmap v395: 196 pgs: 1 stale+creating+activating, 83 stale+active+clean, 7 creating+activating, 105 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: Monitor daemon marked osd.4 down, but it is still running 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: map e289 wrongly marked me down at e288 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: osd.4 marked itself dead as of e289 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: map e289 wrongly marked me down at e288 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[57160]: osd.7 marked itself dead as of e289 2026-03-10T08:41:50.678 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:41:50 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:41:50.221+0000 7f8c3fb1a640 -1 osd.0 290 osdmap NOUP flag is set, waiting for it to clear 
2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: Monitor daemon marked osd.0 down, but it is still running 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: map e288 wrongly marked me down at e288 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: osdmap e289: 8 total, 5 up, 8 in 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: pgmap v395: 196 pgs: 1 stale+creating+activating, 83 stale+active+clean, 7 creating+activating, 105 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: Monitor daemon marked osd.4 down, but it is still running 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: map e289 wrongly marked me down at e288 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: osd.4 marked itself dead as of e289 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: map e289 wrongly marked me down at e288 2026-03-10T08:41:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:50 vm03 ceph-mon[50703]: osd.7 marked itself dead as of e289 2026-03-10T08:41:51.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:51 vm06 ceph-mon[54477]: osdmap e290: 8 total, 5 up, 8 in 2026-03-10T08:41:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:51 vm03 ceph-mon[57160]: osdmap e290: 8 total, 5 up, 8 in 2026-03-10T08:41:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:51 vm03 ceph-mon[50703]: osdmap e290: 8 total, 
5 up, 8 in 2026-03-10T08:41:52.543 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:41:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:41:52.240+0000 7fe82018d640 -1 osd.4 291 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:41:52.543 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:41:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:41:52.239+0000 7f7887c22640 -1 osd.7 291 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:41:52.543 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:52 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:52.543 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:52 vm06 ceph-mon[54477]: pgmap v397: 196 pgs: 1 stale+creating+activating, 83 stale+active+clean, 7 creating+activating, 105 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 247 B/s wr, 1 op/s 2026-03-10T08:41:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:52 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:52 vm03 ceph-mon[57160]: pgmap v397: 196 pgs: 1 stale+creating+activating, 83 stale+active+clean, 7 creating+activating, 105 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 247 B/s wr, 1 op/s 2026-03-10T08:41:52.678 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:41:52 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:41:52.233+0000 7f8c47d54640 -1 osd.0 291 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:41:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:52 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:52 vm03 ceph-mon[50703]: pgmap v397: 196 pgs: 1 stale+creating+activating, 83 stale+active+clean, 7 creating+activating, 105 active+clean; 455 KiB data, 480 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 247 B/s wr, 1 op/s 2026-03-10T08:41:52.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:41:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:41:53.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:53 vm06 ceph-mon[54477]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:41:53.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:53 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:53.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:53 vm06 ceph-mon[54477]: osdmap e291: 8 total, 5 up, 8 in 2026-03-10T08:41:53.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:53 vm03 ceph-mon[57160]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:41:53.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:53 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:53.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:53 vm03 ceph-mon[57160]: osdmap e291: 8 total, 5 up, 8 in 2026-03-10T08:41:53.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:53 vm03 ceph-mon[50703]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:41:53.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:53 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:53.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:53 vm03 ceph-mon[50703]: osdmap e291: 8 total, 5 up, 8 in 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: osd.0 v1:192.168.123.103:6801/3555379361 boot 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: osd.4 v1:192.168.123.106:6800/4000324195 boot 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: osdmap e292: 8 total, 8 up, 8 in 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:41:54.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:54 vm06 ceph-mon[54477]: pgmap v400: 196 pgs: 73 
active+undersized, 44 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 13 undersized+degraded+peered, 33 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 215/600 objects degraded (35.833%) 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: osd.0 v1:192.168.123.103:6801/3555379361 boot 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: osd.4 v1:192.168.123.106:6800/4000324195 boot 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: osdmap e292: 8 total, 8 up, 8 in 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[57160]: pgmap v400: 196 pgs: 73 active+undersized, 44 
undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 13 undersized+degraded+peered, 33 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 215/600 objects degraded (35.833%) 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: osd.0 v1:192.168.123.103:6801/3555379361 boot 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: osd.4 v1:192.168.123.106:6800/4000324195 boot 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: osdmap e292: 8 total, 8 up, 8 in 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:41:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:54 vm03 ceph-mon[50703]: pgmap v400: 196 pgs: 73 active+undersized, 44 undersized+peered, 4 
stale+active+clean, 29 active+undersized+degraded, 13 undersized+degraded+peered, 33 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 215/600 objects degraded (35.833%) 2026-03-10T08:41:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:55 vm06 ceph-mon[54477]: Health check failed: Reduced data availability: 26 pgs inactive (PG_AVAILABILITY) 2026-03-10T08:41:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:55 vm06 ceph-mon[54477]: Health check failed: Degraded data redundancy: 215/600 objects degraded (35.833%), 42 pgs degraded (PG_DEGRADED) 2026-03-10T08:41:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:55 vm06 ceph-mon[54477]: osdmap e293: 8 total, 8 up, 8 in 2026-03-10T08:41:55.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:55 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[57160]: Health check failed: Reduced data availability: 26 pgs inactive (PG_AVAILABILITY) 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[57160]: Health check failed: Degraded data redundancy: 215/600 objects degraded (35.833%), 42 pgs degraded (PG_DEGRADED) 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[57160]: osdmap e293: 8 total, 8 up, 8 in 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[50703]: Health check failed: Reduced data availability: 26 pgs inactive (PG_AVAILABILITY) 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[50703]: Health check failed: Degraded data redundancy: 215/600 objects degraded (35.833%), 42 pgs degraded (PG_DEGRADED) 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[50703]: osdmap e293: 8 total, 8 up, 8 in 2026-03-10T08:41:55.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:55 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:41:56.347 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete PASSED [ 76%] 2026-03-10T08:41:56.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:56 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:56.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:56 vm03 ceph-mon[57160]: osdmap e294: 8 total, 8 up, 8 in 2026-03-10T08:41:56.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:56 vm03 ceph-mon[57160]: pgmap v403: 196 pgs: 73 active+undersized, 44 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 13 undersized+degraded+peered, 33 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 215/600 objects degraded (35.833%) 2026-03-10T08:41:56.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:56 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:56.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:56 vm03 ceph-mon[50703]: osdmap e294: 8 total, 8 up, 8 in 2026-03-10T08:41:56.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:56 vm03 ceph-mon[50703]: pgmap v403: 196 pgs: 73 active+undersized, 44 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 13 undersized+degraded+peered, 33 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 215/600 objects degraded (35.833%) 2026-03-10T08:41:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:56 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/606076103' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:41:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:56 vm06 ceph-mon[54477]: osdmap e294: 8 total, 8 up, 8 in 2026-03-10T08:41:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:56 vm06 ceph-mon[54477]: pgmap v403: 196 pgs: 73 active+undersized, 44 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 13 undersized+degraded+peered, 33 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 215/600 objects degraded (35.833%) 2026-03-10T08:41:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:57 vm03 ceph-mon[57160]: osdmap e295: 8 total, 8 up, 8 in 2026-03-10T08:41:57.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:57 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:41:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:57 vm03 ceph-mon[50703]: osdmap e295: 8 total, 8 up, 8 in 2026-03-10T08:41:57.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:57 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-10T08:41:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:57 vm06 ceph-mon[54477]: osdmap e295: 8 total, 8 up, 8 in 2026-03-10T08:41:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:57 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: pgmap v405: 164 pgs: 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 85 KiB/s rd, 85 op/s 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 26 pgs inactive) 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 215/600 objects degraded (35.833%), 42 pgs degraded) 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: osdmap e296: 8 total, 8 up, 8 in 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 
08:41:58 vm03 ceph-mon[50703]: pgmap v405: 164 pgs: 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 85 KiB/s rd, 85 op/s 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 26 pgs inactive) 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 215/600 objects degraded (35.833%), 42 pgs degraded) 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: osdmap e296: 8 total, 8 up, 8 in 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:41:58.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:58 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:41:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: pgmap v405: 164 pgs: 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 85 KiB/s rd, 85 op/s 2026-03-10T08:41:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:41:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: Health check cleared: 
PG_AVAILABILITY (was: Reduced data availability: 26 pgs inactive) 2026-03-10T08:41:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 215/600 objects degraded (35.833%), 42 pgs degraded) 2026-03-10T08:41:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: osdmap e296: 8 total, 8 up, 8 in 2026-03-10T08:41:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T08:41:58.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T08:41:58.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:58 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: osdmap e297: 8 total, 8 up, 8 in 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: osdmap e298: 8 total, 8 up, 8 in 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-10T08:41:59.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:41:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:41:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a[50699]: 2026-03-10T08:41:59.373+0000 7f8dbdb2b640 -1 mon.a@0(leader).osd e298 definitely_dead 0 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: osdmap e297: 8 total, 8 up, 8 in 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: osdmap e298: 8 total, 8 up, 8 in 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-10T08:41:59.679 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:41:59 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: osdmap e297: 8 total, 8 up, 8 in 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: osdmap e298: 8 total, 8 up, 8 in 2026-03-10T08:41:59.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-10T08:41:59.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:41:59 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[57160]: pgmap v408: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 85 KiB/s rd, 85 op/s 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[57160]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[57160]: osdmap e299: 8 total, 5 up, 8 in 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[50703]: pgmap v408: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 85 KiB/s rd, 85 op/s 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[50703]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-10T08:42:00.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:00 vm03 ceph-mon[50703]: osdmap e299: 8 total, 5 up, 8 in 2026-03-10T08:42:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:00 vm06 ceph-mon[54477]: pgmap v408: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail; 85 KiB/s rd, 85 op/s 
2026-03-10T08:42:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:00 vm06 ceph-mon[54477]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:42:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:00 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-10T08:42:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:00 vm06 ceph-mon[54477]: osdmap e299: 8 total, 5 up, 8 in 2026-03-10T08:42:01.178 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:42:00 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:42:00.923+0000 7f219086c640 -1 osd.2 299 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[57160]: Monitor daemon marked osd.2 down, but it is still running 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[57160]: map e299 wrongly marked me down at e299 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[57160]: osd.2 marked itself dead as of e299 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[57160]: osdmap e300: 8 total, 5 up, 8 in 2026-03-10T08:42:01.678 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:42:01 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:42:01.391+0000 7f218445c640 -1 osd.2 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[50703]: Monitor daemon marked osd.2 down, but it is still running 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[50703]: map e299 wrongly marked me down at e299 2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[50703]: osd.2 marked itself dead as of e299 
2026-03-10T08:42:01.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:01 vm03 ceph-mon[50703]: osdmap e300: 8 total, 5 up, 8 in 2026-03-10T08:42:01.839 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:42:01 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:42:01.630+0000 7fbfc1029640 -1 osd.5 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:01 vm06 ceph-mon[54477]: Monitor daemon marked osd.2 down, but it is still running 2026-03-10T08:42:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:01 vm06 ceph-mon[54477]: map e299 wrongly marked me down at e299 2026-03-10T08:42:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:01 vm06 ceph-mon[54477]: osd.2 marked itself dead as of e299 2026-03-10T08:42:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:01 vm06 ceph-mon[54477]: osdmap e300: 8 total, 5 up, 8 in 2026-03-10T08:42:02.839 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:42:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:42:02.474+0000 7fbfb4406640 -1 osd.5 301 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:02.839 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:42:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:42:02.686+0000 7f788c5f9640 -1 osd.7 301 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:02.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:42:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:42:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: pgmap v411: 196 pgs: 52 stale+active+clean, 32 unknown, 112 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: Monitor daemon marked osd.5 down, 
but it is still running 2026-03-10T08:42:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: map e300 wrongly marked me down at e299 2026-03-10T08:42:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: osd.5 marked itself dead as of e300 2026-03-10T08:42:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:02.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:02.840 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:02 vm06 ceph-mon[54477]: osd.7 marked itself dead as of e300 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: pgmap v411: 196 pgs: 52 stale+active+clean, 32 unknown, 112 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: Monitor daemon marked osd.5 down, but it is still running 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: map e300 wrongly marked me down at e299 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: osd.5 marked itself dead as of e300 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[57160]: osd.7 marked itself dead as of e300 2026-03-10T08:42:02.928 
INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:42:02 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:42:02.479+0000 7f218445c640 -1 osd.2 301 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: pgmap v411: 196 pgs: 52 stale+active+clean, 32 unknown, 112 active+clean; 455 KiB data, 481 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: Monitor daemon marked osd.5 down, but it is still running 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: map e300 wrongly marked me down at e299 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: osd.5 marked itself dead as of e300 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-mon[50703]: osd.7 marked itself dead as of e300 2026-03-10T08:42:02.928 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 10 08:42:02 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-rgw-foo-a[80324]: 2026-03-10T08:42:02.499+0000 7f1160845640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 94011383497856 err (110) Connection timed out 2026-03-10T08:42:03.839 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:42:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:42:03.481+0000 7fbfbc640640 -1 osd.5 302 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:42:03.839 
INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:42:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:42:03.490+0000 7f7887c22640 -1 osd.7 302 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:42:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:03 vm06 ceph-mon[54477]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:42:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:03 vm06 ceph-mon[54477]: map e300 wrongly marked me down at e299 2026-03-10T08:42:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:03 vm06 ceph-mon[54477]: osdmap e301: 8 total, 5 up, 8 in 2026-03-10T08:42:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:03 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:03 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[57160]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[57160]: map e300 wrongly marked me down at e299 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[57160]: osdmap e301: 8 total, 5 up, 8 in 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:03.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:42:03 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:42:03.492+0000 7f218c696640 -1 osd.2 302 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[50703]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[50703]: map e300 wrongly marked me down at e299 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[50703]: osdmap e301: 8 total, 5 up, 8 in 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:03 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: pgmap v414: 196 pgs: 24 active+undersized, 5 undersized+degraded+peered+wait, 21 active+undersized+degraded+wait, 2 stale+active+clean, 1 unknown, 25 undersized+peered+wait, 53 active+undersized+wait, 9 active+undersized+degraded, 56 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 172/597 objects degraded (28.811%) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: Health check failed: Degraded data redundancy: 172/597 objects degraded (28.811%), 35 pgs degraded (PG_DEGRADED) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[57160]: osdmap e302: 8 total, 5 up, 8 in 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: from='client.14580 
v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: pgmap v414: 196 pgs: 24 active+undersized, 5 undersized+degraded+peered+wait, 21 active+undersized+degraded+wait, 2 stale+active+clean, 1 unknown, 25 undersized+peered+wait, 53 active+undersized+wait, 9 active+undersized+degraded, 56 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 172/597 objects degraded (28.811%) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: Health check failed: Degraded data redundancy: 172/597 objects degraded (28.811%), 35 pgs degraded (PG_DEGRADED) 2026-03-10T08:42:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:42:04.830 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:04.830 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:04 vm03 ceph-mon[50703]: osdmap e302: 8 total, 5 up, 8 in 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: pgmap v414: 196 pgs: 24 active+undersized, 5 undersized+degraded+peered+wait, 21 active+undersized+degraded+wait, 2 stale+active+clean, 1 unknown, 25 undersized+peered+wait, 53 active+undersized+wait, 
9 active+undersized+degraded, 56 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 172/597 objects degraded (28.811%) 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: Health check failed: Degraded data redundancy: 172/597 objects degraded (28.811%), 35 pgs degraded (PG_DEGRADED) 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:04 vm06 ceph-mon[54477]: osdmap e302: 8 total, 5 up, 8 in 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: osd.5 v1:192.168.123.106:6804/74091533 boot 2026-03-10T08:42:05.839 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: osdmap e303: 8 total, 8 up, 8 in 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: pgmap v417: 196 pgs: 24 active+undersized, 5 undersized+degraded+peered+wait, 21 active+undersized+degraded+wait, 23 stale+active+clean, 1 unknown, 25 undersized+peered+wait, 53 active+undersized+wait, 9 active+undersized+degraded, 35 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 172/597 objects degraded (28.811%) 2026-03-10T08:42:05.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:05 vm06 ceph-mon[54477]: osdmap e304: 8 total, 8 up, 8 in 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: osd.5 v1:192.168.123.106:6804/74091533 boot 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 
08:42:05 vm03 ceph-mon[57160]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: osdmap e303: 8 total, 8 up, 8 in 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: pgmap v417: 196 pgs: 24 active+undersized, 5 undersized+degraded+peered+wait, 21 active+undersized+degraded+wait, 23 stale+active+clean, 1 unknown, 25 undersized+peered+wait, 53 active+undersized+wait, 9 active+undersized+degraded, 35 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 172/597 objects degraded (28.811%) 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[57160]: osdmap e304: 8 total, 8 up, 8 in 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: osd.5 v1:192.168.123.106:6804/74091533 boot 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: osd.7 
v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:42:05.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:42:05.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: osdmap e303: 8 total, 8 up, 8 in 2026-03-10T08:42:05.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: pgmap v417: 196 pgs: 24 active+undersized, 5 undersized+degraded+peered+wait, 21 active+undersized+degraded+wait, 23 stale+active+clean, 1 unknown, 25 undersized+peered+wait, 53 active+undersized+wait, 9 active+undersized+degraded, 35 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 172/597 objects degraded (28.811%) 2026-03-10T08:42:05.929 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:05 vm03 ceph-mon[50703]: osdmap e304: 8 total, 8 up, 8 in 2026-03-10T08:42:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:06 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:06 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:06 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:06 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:06 vm03 ceph-mon[50703]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:06 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:07.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:06 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/174788168' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:07.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:06 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:07.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:06 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:07.628 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb PASSED [ 78%] 2026-03-10T08:42:07.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:07 vm03 ceph-mon[57160]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:07.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:07 vm03 ceph-mon[57160]: osdmap e305: 8 total, 8 up, 8 in 2026-03-10T08:42:07.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:07 vm03 ceph-mon[57160]: pgmap v420: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T08:42:07.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:07 vm03 ceph-mon[50703]: from='client.24902 ' 
entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:07.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:07 vm03 ceph-mon[50703]: osdmap e305: 8 total, 8 up, 8 in 2026-03-10T08:42:07.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:07 vm03 ceph-mon[50703]: pgmap v420: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T08:42:08.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:07 vm06 ceph-mon[54477]: from='client.24902 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:08.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:07 vm06 ceph-mon[54477]: osdmap e305: 8 total, 8 up, 8 in 2026-03-10T08:42:08.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:07 vm06 ceph-mon[54477]: pgmap v420: 196 pgs: 196 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[57160]: osdmap e306: 8 total, 8 up, 8 in 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[57160]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[57160]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 172/597 objects degraded (28.811%), 35 pgs degraded) 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[57160]: osdmap e307: 8 total, 8 up, 8 in 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[50703]: osdmap e306: 8 total, 8 up, 8 in 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[50703]: Health check cleared: PG_AVAILABILITY (was: Reduced 
data availability: 1 pg inactive) 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[50703]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 172/597 objects degraded (28.811%), 35 pgs degraded) 2026-03-10T08:42:08.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:08 vm03 ceph-mon[50703]: osdmap e307: 8 total, 8 up, 8 in 2026-03-10T08:42:09.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:08 vm06 ceph-mon[54477]: osdmap e306: 8 total, 8 up, 8 in 2026-03-10T08:42:09.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:08 vm06 ceph-mon[54477]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-10T08:42:09.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:08 vm06 ceph-mon[54477]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 172/597 objects degraded (28.811%), 35 pgs degraded) 2026-03-10T08:42:09.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:08 vm06 ceph-mon[54477]: osdmap e307: 8 total, 8 up, 8 in 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: pgmap v423: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 0 B/s, 0 objects/s recovering 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: osdmap e308: 8 total, 8 up, 8 in 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:42:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:42:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a[50699]: 2026-03-10T08:42:09.661+0000 7f8dbdb2b640 -1 mon.a@0(leader).osd e308 definitely_dead 0 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: pgmap v423: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 0 B/s, 0 objects/s recovering 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: osdmap e308: 8 total, 8 up, 8 in 2026-03-10T08:42:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:09 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: pgmap v423: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 0 B/s, 0 objects/s recovering 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: osdmap e308: 8 total, 8 up, 8 in 2026-03-10T08:42:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:09 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T08:42:11.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:10 vm06 ceph-mon[54477]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:42:11.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:10 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T08:42:11.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:10 vm06 ceph-mon[54477]: osdmap e309: 8 total, 5 up, 8 in 2026-03-10T08:42:11.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:10 vm03 ceph-mon[57160]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:42:11.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:10 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T08:42:11.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:10 vm03 ceph-mon[57160]: osdmap e309: 8 total, 5 up, 8 in 2026-03-10T08:42:11.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:10 vm03 ceph-mon[50703]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T08:42:11.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:10 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T08:42:11.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:10 vm03 ceph-mon[50703]: osdmap e309: 8 total, 5 up, 8 in 2026-03-10T08:42:11.653 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:42:11 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:42:11.296+0000 7fd9bbdf7640 -1 osd.1 309 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:11 vm03 ceph-mon[57160]: osd.1 marked itself dead as of e309 2026-03-10T08:42:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:11 vm03 ceph-mon[57160]: pgmap v426: 196 pgs: 51 stale+active+clean, 32 unknown, 113 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:11 vm03 ceph-mon[57160]: osdmap e310: 8 total, 5 up, 8 in 2026-03-10T08:42:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:11 vm03 ceph-mon[50703]: osd.1 marked itself dead as of e309 2026-03-10T08:42:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:11 vm03 ceph-mon[50703]: pgmap v426: 196 pgs: 51 stale+active+clean, 32 unknown, 113 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:11 vm03 ceph-mon[50703]: osdmap e310: 8 total, 5 
up, 8 in 2026-03-10T08:42:11.928 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:42:11 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:42:11.650+0000 7fd9ae9d3640 -1 osd.1 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:11 vm06 ceph-mon[54477]: osd.1 marked itself dead as of e309 2026-03-10T08:42:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:11 vm06 ceph-mon[54477]: pgmap v426: 196 pgs: 51 stale+active+clean, 32 unknown, 113 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:11 vm06 ceph-mon[54477]: osdmap e310: 8 total, 5 up, 8 in 2026-03-10T08:42:12.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:42:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:42:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:12 vm06 ceph-mon[54477]: Monitor daemon marked osd.1 down, but it is still running 2026-03-10T08:42:12.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:12 vm06 ceph-mon[54477]: map e309 wrongly marked me down at e309 2026-03-10T08:42:13.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:12 vm03 ceph-mon[57160]: Monitor daemon marked osd.1 down, but it is still running 2026-03-10T08:42:13.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:12 vm03 ceph-mon[57160]: map e309 wrongly marked me down at e309 2026-03-10T08:42:13.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:12 vm03 ceph-mon[50703]: Monitor daemon marked osd.1 down, but it is still running 2026-03-10T08:42:13.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:12 vm03 ceph-mon[50703]: map e309 wrongly marked me down at e309 2026-03-10T08:42:14.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:42:13 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 
2026-03-10T08:42:13.675+0000 7f788ce0c640 -1 osd.7 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T08:42:14.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:42:13 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:42:13.761+0000 7f7887c22640 -1 osd.7 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: Monitor daemon marked osd.2 down, but it is still running 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: map e310 wrongly marked me down at e309 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: osd.2 marked itself dead as of e310 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: pgmap v428: 196 pgs: 34 active+undersized, 10 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 1 unknown, 33 active+undersized+wait, 24 undersized+peered+wait, 14 active+undersized+degraded, 67 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 213/597 objects degraded (35.678%) 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: map e310 wrongly marked me down at e309 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:13 vm06 ceph-mon[54477]: osd.7 marked itself dead as of e310 2026-03-10T08:42:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 
10 08:42:13 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: Monitor daemon marked osd.2 down, but it is still running 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: map e310 wrongly marked me down at e309 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: osd.2 marked itself dead as of e310 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: pgmap v428: 196 pgs: 34 active+undersized, 10 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 1 unknown, 33 active+undersized+wait, 24 undersized+peered+wait, 14 active+undersized+degraded, 67 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 213/597 objects degraded (35.678%) 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: map e310 wrongly marked me down at e309 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: osd.7 marked itself dead as of e310 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:14.178 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:42:13 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:42:13.825+0000 7f218c696640 -1 osd.2 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: Monitor daemon marked osd.2 down, but it is still running 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: map e310 wrongly marked me down at e309 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: osd.2 marked itself dead as of e310 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: pgmap v428: 196 pgs: 34 active+undersized, 10 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 1 unknown, 33 active+undersized+wait, 24 undersized+peered+wait, 14 active+undersized+degraded, 67 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 213/597 objects degraded (35.678%) 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: Monitor daemon marked osd.7 down, but it is still running 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: map e310 wrongly marked me down at e309 2026-03-10T08:42:14.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: osd.7 marked itself dead as of e310 2026-03-10T08:42:14.178 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:13 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:14.178 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:42:13 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:42:13.751+0000 7fd9b6c0d640 -1 osd.1 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T08:42:15.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:14 vm06 ceph-mon[54477]: Health check failed: Degraded data redundancy: 213/597 objects degraded (35.678%), 37 pgs degraded (PG_DEGRADED) 2026-03-10T08:42:15.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:14 vm06 ceph-mon[54477]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:42:15.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:14 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:15.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:14 vm06 ceph-mon[54477]: osdmap e311: 8 total, 5 up, 8 in 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[57160]: Health check failed: Degraded data redundancy: 213/597 objects degraded (35.678%), 37 pgs degraded (PG_DEGRADED) 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[57160]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[57160]: osdmap e311: 8 total, 5 up, 8 in 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[50703]: Health check failed: Degraded data redundancy: 213/597 objects degraded (35.678%), 37 pgs degraded (PG_DEGRADED) 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[50703]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:15.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:14 vm03 ceph-mon[50703]: osdmap e311: 8 total, 5 up, 8 in 2026-03-10T08:42:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:42:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:42:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:42:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:42:16.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: osd.1 v1:192.168.123.103:6805/129267279 boot 2026-03-10T08:42:16.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:42:15 vm06 ceph-mon[54477]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:42:16.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:42:16.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: osdmap e312: 8 total, 8 up, 8 in 2026-03-10T08:42:16.090 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:15 vm06 ceph-mon[54477]: pgmap v431: 196 pgs: 36 stale+active+clean, 34 active+undersized, 10 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 1 unknown, 33 active+undersized+wait, 24 undersized+peered+wait, 14 active+undersized+degraded, 31 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 213/597 objects degraded (35.678%) 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: osd.1 v1:192.168.123.103:6805/129267279 boot 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: osd.2 
v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: osdmap e312: 8 total, 8 up, 8 in 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[57160]: pgmap v431: 196 pgs: 36 stale+active+clean, 34 active+undersized, 10 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 1 unknown, 33 active+undersized+wait, 24 undersized+peered+wait, 14 active+undersized+degraded, 31 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 213/597 objects degraded (35.678%) 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: osd.1 v1:192.168.123.103:6805/129267279 boot 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: osd.7 v1:192.168.123.106:6812/1491932823 boot 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: osd.2 v1:192.168.123.103:6809/1710778110 boot 2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: osdmap e312: 8 total, 8 up, 8 in 
2026-03-10T08:42:16.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:15 vm03 ceph-mon[50703]: pgmap v431: 196 pgs: 36 stale+active+clean, 34 active+undersized, 10 undersized+degraded+peered+wait, 13 active+undersized+degraded+wait, 1 unknown, 33 active+undersized+wait, 24 undersized+peered+wait, 14 active+undersized+degraded, 31 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 213/597 objects degraded (35.678%) 2026-03-10T08:42:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:16 vm06 ceph-mon[54477]: osdmap e313: 8 total, 8 up, 8 in 2026-03-10T08:42:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:16 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:16 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:17.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:16 vm03 ceph-mon[57160]: osdmap e313: 8 total, 8 up, 8 in 2026-03-10T08:42:17.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:16 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:17.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:16 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:16 vm03 ceph-mon[50703]: osdmap e313: 8 total, 8 up, 8 in 2026-03-10T08:42:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:16 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:16 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:18.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:17 vm06 ceph-mon[54477]: pgmap v433: 196 pgs: 196 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:17 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:18.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:17 vm03 ceph-mon[57160]: pgmap v433: 196 pgs: 196 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:18.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:17 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:18.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:17 vm03 ceph-mon[50703]: pgmap v433: 196 pgs: 196 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:18.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:17 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:18.847 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb_error PASSED [ 79%] 2026-03-10T08:42:19.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:18 vm03 ceph-mon[57160]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 213/597 objects degraded (35.678%), 37 pgs degraded) 2026-03-10T08:42:19.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:18 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:19.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:18 vm03 ceph-mon[57160]: osdmap e314: 8 total, 8 up, 8 in 2026-03-10T08:42:19.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:18 vm03 ceph-mon[50703]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 213/597 objects degraded (35.678%), 37 pgs degraded) 2026-03-10T08:42:19.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:18 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:19.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:18 vm03 ceph-mon[50703]: osdmap e314: 8 total, 8 up, 8 in 2026-03-10T08:42:19.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:18 vm06 ceph-mon[54477]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 213/597 objects degraded (35.678%), 37 pgs degraded) 2026-03-10T08:42:19.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:18 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3921516396' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:19.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:18 vm06 ceph-mon[54477]: osdmap e314: 8 total, 8 up, 8 in 2026-03-10T08:42:19.882 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:42:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:42:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:42:20.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:19 vm03 ceph-mon[57160]: osdmap e315: 8 total, 8 up, 8 in 2026-03-10T08:42:20.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:19 vm03 ceph-mon[57160]: pgmap v436: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 3.3 KiB/s rd, 3 op/s 2026-03-10T08:42:20.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:19 vm03 ceph-mon[50703]: osdmap e315: 8 total, 8 up, 8 in 2026-03-10T08:42:20.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:19 vm03 ceph-mon[50703]: pgmap v436: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 3.3 KiB/s rd, 3 op/s 2026-03-10T08:42:20.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:19 vm06 ceph-mon[54477]: osdmap e315: 8 total, 8 up, 8 in 2026-03-10T08:42:20.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:19 vm06 ceph-mon[54477]: pgmap v436: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 3.3 KiB/s rd, 3 op/s 2026-03-10T08:42:21.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:20 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:21.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:20 vm03 ceph-mon[57160]: osdmap e316: 8 total, 8 up, 8 in 2026-03-10T08:42:21.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:20 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not 
have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:21.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:20 vm03 ceph-mon[50703]: osdmap e316: 8 total, 8 up, 8 in 2026-03-10T08:42:21.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:20 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:21.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:20 vm06 ceph-mon[54477]: osdmap e316: 8 total, 8 up, 8 in 2026-03-10T08:42:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:21 vm03 ceph-mon[57160]: osdmap e317: 8 total, 8 up, 8 in 2026-03-10T08:42:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:21 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3400642964' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:21 vm03 ceph-mon[57160]: pgmap v439: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:21 vm03 ceph-mon[50703]: osdmap e317: 8 total, 8 up, 8 in 2026-03-10T08:42:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:21 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3400642964' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:21 vm03 ceph-mon[50703]: pgmap v439: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:22.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:21 vm06 ceph-mon[54477]: osdmap e317: 8 total, 8 up, 8 in 2026-03-10T08:42:22.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:21 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3400642964' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:22.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:21 vm06 ceph-mon[54477]: pgmap v439: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:22.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:42:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:42:22.908 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lock PASSED [ 80%] 2026-03-10T08:42:23.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:22 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3400642964' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:23.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:22 vm03 ceph-mon[57160]: osdmap e318: 8 total, 8 up, 8 in 2026-03-10T08:42:23.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:22 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3400642964' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:23.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:22 vm03 ceph-mon[50703]: osdmap e318: 8 total, 8 up, 8 in 2026-03-10T08:42:23.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:22 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3400642964' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:23.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:22 vm06 ceph-mon[54477]: osdmap e318: 8 total, 8 up, 8 in 2026-03-10T08:42:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:23 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:23 vm03 ceph-mon[57160]: osdmap e319: 8 total, 8 up, 8 in 2026-03-10T08:42:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:23 vm03 ceph-mon[57160]: pgmap v442: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:23 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:23 vm03 ceph-mon[50703]: osdmap e319: 8 total, 8 up, 8 in 2026-03-10T08:42:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:23 vm03 ceph-mon[50703]: pgmap v442: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:24.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:23 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:24.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:23 vm06 ceph-mon[54477]: osdmap e319: 8 total, 8 up, 8 in 2026-03-10T08:42:24.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:23 vm06 ceph-mon[54477]: pgmap v442: 164 pgs: 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:25.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:24 vm06 ceph-mon[54477]: osdmap e320: 8 total, 8 up, 8 in 2026-03-10T08:42:25.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:24 vm03 ceph-mon[57160]: osdmap e320: 8 total, 8 up, 8 in 2026-03-10T08:42:25.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:24 vm03 ceph-mon[50703]: osdmap e320: 8 total, 8 up, 8 in 2026-03-10T08:42:26.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:25 vm06 ceph-mon[54477]: osdmap e321: 8 total, 8 up, 8 in 2026-03-10T08:42:26.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:25 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1840546252' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:26.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:25 vm06 ceph-mon[54477]: from='client.24908 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:26.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:25 vm06 ceph-mon[54477]: pgmap v445: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[57160]: osdmap e321: 8 total, 8 up, 8 in 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1840546252' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[57160]: from='client.24908 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[57160]: pgmap v445: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[50703]: osdmap e321: 8 total, 8 up, 8 in 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1840546252' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[50703]: from='client.24908 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:26.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:25 vm03 ceph-mon[50703]: pgmap v445: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 491 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:26.982 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute PASSED [ 81%] 2026-03-10T08:42:27.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:26 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:27.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:26 vm06 ceph-mon[54477]: from='client.24908 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:27.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:26 vm06 ceph-mon[54477]: osdmap e322: 8 total, 8 up, 8 
in 2026-03-10T08:42:27.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:26 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:27.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:26 vm03 ceph-mon[57160]: from='client.24908 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:27.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:26 vm03 ceph-mon[57160]: osdmap e322: 8 total, 8 up, 8 in 2026-03-10T08:42:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:26 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:26 vm03 ceph-mon[50703]: from='client.24908 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:27.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:26 vm03 ceph-mon[50703]: osdmap e322: 8 total, 8 up, 8 in 2026-03-10T08:42:28.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:27 vm06 ceph-mon[54477]: osdmap e323: 8 total, 8 up, 8 in 2026-03-10T08:42:28.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:27 vm06 ceph-mon[54477]: pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:28.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:27 vm03 ceph-mon[57160]: osdmap e323: 8 total, 8 up, 8 in 2026-03-10T08:42:28.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:27 vm03 ceph-mon[57160]: pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:28.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:27 vm03 ceph-mon[50703]: osdmap e323: 8 total, 8 up, 8 in 2026-03-10T08:42:28.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:27 vm03 ceph-mon[50703]: pgmap v448: 164 
pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:29.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:28 vm06 ceph-mon[54477]: osdmap e324: 8 total, 8 up, 8 in 2026-03-10T08:42:29.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:28 vm03 ceph-mon[57160]: osdmap e324: 8 total, 8 up, 8 in 2026-03-10T08:42:29.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:28 vm03 ceph-mon[50703]: osdmap e324: 8 total, 8 up, 8 in 2026-03-10T08:42:29.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:42:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:42:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:42:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:30 vm06 ceph-mon[54477]: osdmap e325: 8 total, 8 up, 8 in 2026-03-10T08:42:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:30 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/909204247' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:30.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:30 vm06 ceph-mon[54477]: pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:30 vm03 ceph-mon[57160]: osdmap e325: 8 total, 8 up, 8 in 2026-03-10T08:42:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:30 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/909204247' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:30.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:30 vm03 ceph-mon[57160]: pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:30 vm03 ceph-mon[50703]: osdmap e325: 8 total, 8 up, 8 in 2026-03-10T08:42:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:30 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/909204247' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:30.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:30 vm03 ceph-mon[50703]: pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:31.022 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_execute PASSED [ 82%] 2026-03-10T08:42:31.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:31 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/909204247' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:31.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:31 vm06 ceph-mon[54477]: osdmap e326: 8 total, 8 up, 8 in 2026-03-10T08:42:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:31 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/909204247' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:31.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:31 vm03 ceph-mon[57160]: osdmap e326: 8 total, 8 up, 8 in 2026-03-10T08:42:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:31 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/909204247' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:31.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:31 vm03 ceph-mon[50703]: osdmap e326: 8 total, 8 up, 8 in 2026-03-10T08:42:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:32 vm06 ceph-mon[54477]: osdmap e327: 8 total, 8 up, 8 in 2026-03-10T08:42:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:32 vm06 ceph-mon[54477]: pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:32 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:32.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:32 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[57160]: osdmap e327: 8 total, 8 up, 8 in 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[57160]: pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[50703]: osdmap e327: 8 total, 8 up, 8 in 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[50703]: pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:32.428 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y' 2026-03-10T08:42:32.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:32 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:32.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:42:32 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:42:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:33 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:33.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:33 vm06 ceph-mon[54477]: osdmap e328: 8 total, 8 up, 8 in 2026-03-10T08:42:33.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:33 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:33.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:33 vm03 ceph-mon[57160]: osdmap e328: 8 total, 8 up, 8 in 2026-03-10T08:42:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:33 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:33.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:33 vm03 ceph-mon[50703]: osdmap e328: 8 total, 8 up, 8 in 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[57160]: osdmap e329: 8 total, 8 up, 8 in 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[57160]: 
from='client.? v1:192.168.123.103:0/3694988356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[57160]: from='client.24917 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[57160]: pgmap v457: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[50703]: osdmap e329: 8 total, 8 up, 8 in 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3694988356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[50703]: from='client.24917 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:34.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:34 vm03 ceph-mon[50703]: pgmap v457: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:34 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:34 vm06 ceph-mon[54477]: osdmap e329: 8 total, 8 up, 8 in 2026-03-10T08:42:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:34 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3694988356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:34 vm06 ceph-mon[54477]: from='client.24917 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:34.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:34 vm06 ceph-mon[54477]: pgmap v457: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:35.051 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_setxattr PASSED [ 83%] 2026-03-10T08:42:35.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:35 vm06 ceph-mon[54477]: from='client.24917 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:35.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:35 vm06 ceph-mon[54477]: osdmap e330: 8 total, 8 up, 8 in 2026-03-10T08:42:35.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:35 vm06 ceph-mon[54477]: osdmap e331: 8 total, 8 up, 8 in 2026-03-10T08:42:35.427 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:35 vm03 ceph-mon[57160]: from='client.24917 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:35.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:35 vm03 ceph-mon[57160]: osdmap e330: 8 total, 8 up, 8 in 2026-03-10T08:42:35.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:35 vm03 ceph-mon[57160]: osdmap e331: 8 total, 8 up, 8 in 2026-03-10T08:42:35.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:35 vm03 ceph-mon[50703]: from='client.24917 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:35.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:35 vm03 ceph-mon[50703]: osdmap e330: 8 total, 8 up, 8 in 
2026-03-10T08:42:35.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:35 vm03 ceph-mon[50703]: osdmap e331: 8 total, 8 up, 8 in 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[57160]: pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[57160]: osdmap e332: 8 total, 8 up, 8 in 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[50703]: pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[50703]: osdmap e332: 8 total, 8 up, 8 in 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:42:36.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:36 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T08:42:36.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:36 vm06 ceph-mon[54477]: pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 500 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:36.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:36 vm06 ceph-mon[54477]: osdmap e332: 8 total, 8 up, 8 in 2026-03-10T08:42:36.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:36 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T08:42:36.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:36 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T08:42:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:38 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T08:42:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:38 vm06 ceph-mon[54477]: osdmap e333: 8 total, 8 up, 8 in 2026-03-10T08:42:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:38 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T08:42:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:38 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:42:38.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:38 vm06 ceph-mon[54477]: pgmap v463: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[57160]: osdmap e333: 8 total, 8 up, 8 in 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[57160]: pgmap v463: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[50703]: osdmap e333: 8 total, 8 up, 8 in 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T08:42:38.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:38 vm03 ceph-mon[50703]: pgmap v463: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:39.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:39 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T08:42:39.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:39 vm06 ceph-mon[54477]: osdmap e334: 8 total, 8 up, 8 in 2026-03-10T08:42:39.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:39 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T08:42:39.339 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:39 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[57160]: osdmap e334: 8 total, 8 up, 8 in 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[50703]: osdmap e334: 8 total, 8 up, 8 in 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T08:42:39.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:39 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T08:42:39.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:42:39 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:42:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[57160]: osdmap e335: 8 total, 8 up, 8 in 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[57160]: pgmap v466: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[50703]: osdmap e335: 8 total, 8 up, 8 in 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T08:42:40.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:40 vm03 ceph-mon[50703]: pgmap v466: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:40.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:40 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T08:42:40.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:40 vm06 ceph-mon[54477]: osdmap e335: 8 total, 8 up, 8 in 2026-03-10T08:42:40.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:40 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T08:42:40.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:40 vm06 ceph-mon[54477]: pgmap v466: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[57160]: osdmap e336: 8 total, 8 up, 8 in 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[57160]: osdmap e337: 8 total, 8 up, 8 in 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[50703]: osdmap e336: 8 total, 8 up, 8 in 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[50703]: osdmap e337: 8 total, 8 up, 8 in 2026-03-10T08:42:41.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:41 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T08:42:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:41 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T08:42:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:41 vm06 ceph-mon[54477]: osdmap e336: 8 total, 8 up, 8 in 2026-03-10T08:42:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:41 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T08:42:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:41 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T08:42:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:41 vm06 ceph-mon[54477]: osdmap e337: 8 total, 8 up, 8 in 2026-03-10T08:42:41.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:41 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T08:42:42.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:42 vm03 ceph-mon[57160]: pgmap v469: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:42.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:42 vm03 ceph-mon[50703]: pgmap v469: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:42.580 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:42 vm06 ceph-mon[54477]: pgmap v469: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:42:42.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:42:42 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:42:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:43 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T08:42:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:43 vm03 ceph-mon[57160]: osdmap e338: 8 total, 8 up, 8 in 2026-03-10T08:42:43.428 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:43 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:43 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T08:42:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:43 vm03 ceph-mon[50703]: osdmap e338: 8 total, 8 up, 8 in 2026-03-10T08:42:43.428 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:43 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:43.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:43 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T08:42:43.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:43 vm06 ceph-mon[54477]: osdmap e338: 8 total, 8 up, 8 in 2026-03-10T08:42:43.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:43 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:44.189 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_applications PASSED [ 84%] 2026-03-10T08:42:44.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:44 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:44.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:44 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:44.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:44 vm06 ceph-mon[54477]: osdmap e339: 8 total, 8 up, 8 in 2026-03-10T08:42:44.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:44 vm06 ceph-mon[54477]: pgmap v472: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[57160]: osdmap e339: 8 total, 8 up, 8 in 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[57160]: pgmap v472: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3719305035' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[50703]: osdmap e339: 8 total, 8 up, 8 in 2026-03-10T08:42:44.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:44 vm03 ceph-mon[50703]: pgmap v472: 196 pgs: 196 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:45.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:45 vm06 ceph-mon[54477]: osdmap e340: 8 total, 8 up, 8 in 2026-03-10T08:42:45.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:45 vm03 ceph-mon[57160]: osdmap e340: 8 total, 8 up, 8 in 2026-03-10T08:42:45.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:45 vm03 ceph-mon[50703]: osdmap e340: 8 total, 8 up, 8 in 2026-03-10T08:42:46.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:46 vm06 ceph-mon[54477]: osdmap e341: 8 total, 8 up, 8 in 2026-03-10T08:42:46.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:46 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1100997774' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:46.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:46 vm06 ceph-mon[54477]: pgmap v475: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:46 vm03 ceph-mon[57160]: osdmap e341: 8 total, 8 up, 8 in 2026-03-10T08:42:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:46 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/1100997774' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:46.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:46 vm03 ceph-mon[57160]: pgmap v475: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:46 vm03 ceph-mon[50703]: osdmap e341: 8 total, 8 up, 8 in 2026-03-10T08:42:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:46 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1100997774' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:46.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:46 vm03 ceph-mon[50703]: pgmap v475: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:47.220 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_service_daemon PASSED [ 85%] 2026-03-10T08:42:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:47 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:47 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/1100997774' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:47 vm06 ceph-mon[54477]: osdmap e342: 8 total, 8 up, 8 in 2026-03-10T08:42:47.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:47 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1100997774' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[57160]: osdmap e342: 8 total, 8 up, 8 in 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/1100997774' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[50703]: osdmap e342: 8 total, 8 up, 8 in 2026-03-10T08:42:47.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:47 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T08:42:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:48 vm06 ceph-mon[54477]: osdmap e343: 8 total, 8 up, 8 in 2026-03-10T08:42:48.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:48 vm06 ceph-mon[54477]: pgmap v478: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:48 vm03 ceph-mon[57160]: osdmap e343: 8 total, 8 up, 8 in 2026-03-10T08:42:48.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:48 vm03 ceph-mon[57160]: pgmap v478: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:48 vm03 ceph-mon[50703]: osdmap e343: 8 total, 8 up, 8 in 2026-03-10T08:42:48.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:48 vm03 ceph-mon[50703]: pgmap v478: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:49.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:49 vm06 ceph-mon[54477]: osdmap e344: 8 total, 8 up, 8 in 2026-03-10T08:42:49.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:49 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/3546886066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:49.589 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:49 vm06 ceph-mon[54477]: from='client.24949 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:49 vm03 ceph-mon[57160]: osdmap e344: 8 total, 8 up, 8 in 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:49 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/3546886066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:49 vm03 ceph-mon[57160]: from='client.24949 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:42:49 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:42:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:49 vm03 ceph-mon[50703]: osdmap e344: 8 total, 8 up, 8 in 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:49 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/3546886066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:49.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:49 vm03 ceph-mon[50703]: from='client.24949 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:50.371 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_alignment PASSED [ 86%] 2026-03-10T08:42:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:50 vm03 ceph-mon[57160]: from='client.24949 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:50 vm03 ceph-mon[57160]: osdmap e345: 8 total, 8 up, 8 in 2026-03-10T08:42:50.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:50 vm03 ceph-mon[57160]: pgmap v481: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:50 vm03 ceph-mon[50703]: from='client.24949 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:50 vm03 ceph-mon[50703]: osdmap e345: 8 total, 8 up, 8 in 2026-03-10T08:42:50.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:50 vm03 ceph-mon[50703]: pgmap v481: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:50.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:50 vm06 ceph-mon[54477]: from='client.24949 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T08:42:50.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:50 vm06 ceph-mon[54477]: osdmap e345: 8 total, 8 up, 8 in 2026-03-10T08:42:50.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 
08:42:50 vm06 ceph-mon[54477]: pgmap v481: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:51 vm03 ceph-mon[57160]: osdmap e346: 8 total, 8 up, 8 in 2026-03-10T08:42:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:51 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T08:42:51.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:51 vm03 ceph-mon[57160]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T08:42:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:51 vm03 ceph-mon[50703]: osdmap e346: 8 total, 8 up, 8 in 2026-03-10T08:42:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:51 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T08:42:51.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:51 vm03 ceph-mon[50703]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T08:42:51.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:51 vm06 ceph-mon[54477]: osdmap e346: 8 total, 8 up, 8 in 2026-03-10T08:42:51.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:51 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T08:42:51.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:51 vm06 ceph-mon[54477]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[57160]: pgmap v483: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[57160]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[57160]: osdmap e347: 8 total, 8 up, 8 in 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[57160]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[50703]: pgmap v483: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[50703]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[50703]: osdmap e347: 8 total, 8 up, 8 in 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-10T08:42:52.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:52 vm03 ceph-mon[50703]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-10T08:42:52.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:42:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:42:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:52 vm06 ceph-mon[54477]: pgmap v483: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:52 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:42:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:52 vm06 ceph-mon[54477]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T08:42:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:52 vm06 ceph-mon[54477]: osdmap e347: 8 total, 8 up, 8 in 2026-03-10T08:42:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:52 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-10T08:42:52.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:52 vm06 ceph-mon[54477]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-10T08:42:53.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:53 vm03 ceph-mon[57160]: osdmap e348: 8 total, 8 up, 8 in 2026-03-10T08:42:53.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:53 vm03 ceph-mon[50703]: osdmap e348: 8 total, 8 up, 8 in 2026-03-10T08:42:53.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:53 vm06 ceph-mon[54477]: osdmap e348: 8 total, 8 up, 8 in 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[57160]: pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[57160]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[57160]: osdmap e349: 8 total, 8 up, 8 in 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[57160]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[57160]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[50703]: pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[50703]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[50703]: osdmap e349: 8 total, 8 up, 8 in 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:54.678 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:54 vm03 ceph-mon[50703]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T08:42:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:54 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:42:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:54 vm06 ceph-mon[54477]: pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:42:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:54 vm06 ceph-mon[54477]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-10T08:42:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:54 vm06 ceph-mon[54477]: osdmap e349: 8 total, 8 up, 8 in 2026-03-10T08:42:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:54 vm06 ceph-mon[54477]: from='client.? 
v1:192.168.123.103:0/2905381744' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:54.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:54 vm06 ceph-mon[54477]: from='client.24955 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:55.410 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctxEc::test_alignment PASSED [ 87%]
2026-03-10T08:42:55.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:55 vm06 ceph-mon[54477]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:42:55.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:55 vm06 ceph-mon[54477]: osdmap e350: 8 total, 8 up, 8 in
2026-03-10T08:42:55.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:55 vm06 ceph-mon[54477]: osdmap e351: 8 total, 8 up, 8 in
2026-03-10T08:42:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:55 vm03 ceph-mon[57160]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:42:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:55 vm03 ceph-mon[57160]: osdmap e350: 8 total, 8 up, 8 in
2026-03-10T08:42:55.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:55 vm03 ceph-mon[57160]: osdmap e351: 8 total, 8 up, 8 in
2026-03-10T08:42:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:55 vm03 ceph-mon[50703]: from='client.24955 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:42:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:55 vm03 ceph-mon[50703]: osdmap e350: 8 total, 8 up, 8 in
2026-03-10T08:42:55.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:55 vm03 ceph-mon[50703]: osdmap e351: 8 total, 8 up, 8 in
2026-03-10T08:42:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:56 vm06 ceph-mon[54477]: pgmap v489: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:42:56.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:56 vm06 ceph-mon[54477]: osdmap e352: 8 total, 8 up, 8 in
2026-03-10T08:42:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:56 vm03 ceph-mon[57160]: pgmap v489: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:42:56.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:56 vm03 ceph-mon[57160]: osdmap e352: 8 total, 8 up, 8 in
2026-03-10T08:42:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:56 vm03 ceph-mon[50703]: pgmap v489: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:42:56.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:56 vm03 ceph-mon[50703]: osdmap e352: 8 total, 8 up, 8 in
2026-03-10T08:42:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:57 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/4291420904' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:57 vm06 ceph-mon[54477]: from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:57 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:42:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:57 vm06 ceph-mon[54477]: from='client.24958 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:42:57.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:57 vm06 ceph-mon[54477]: osdmap e353: 8 total, 8 up, 8 in
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/4291420904' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[57160]: from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[57160]: from='client.24958 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[57160]: osdmap e353: 8 total, 8 up, 8 in
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/4291420904' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[50703]: from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[50703]: from='client.24958 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:42:57.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:57 vm03 ceph-mon[50703]: osdmap e353: 8 total, 8 up, 8 in
2026-03-10T08:42:58.430 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_last_version PASSED [ 89%]
2026-03-10T08:42:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:58 vm06 ceph-mon[54477]: pgmap v492: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:42:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:58 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:42:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:58 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:42:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:58 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:42:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:58 vm06 ceph-mon[54477]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:42:58.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:42:58 vm06 ceph-mon[54477]: osdmap e354: 8 total, 8 up, 8 in
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[57160]: pgmap v492: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[57160]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[57160]: osdmap e354: 8 total, 8 up, 8 in
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[50703]: pgmap v492: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[50703]: from='mgr.14706 ' entity='mgr.y'
2026-03-10T08:42:58.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:42:58 vm03 ceph-mon[50703]: osdmap e354: 8 total, 8 up, 8 in
2026-03-10T08:42:59.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:42:59 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:42:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:43:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:00 vm06 ceph-mon[54477]: pgmap v495: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:00 vm06 ceph-mon[54477]: osdmap e355: 8 total, 8 up, 8 in
2026-03-10T08:43:00.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:00 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1448091958' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:00.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:00 vm03 ceph-mon[57160]: pgmap v495: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:00.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:00 vm03 ceph-mon[57160]: osdmap e355: 8 total, 8 up, 8 in
2026-03-10T08:43:00.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:00 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1448091958' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:00.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:00 vm03 ceph-mon[50703]: pgmap v495: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:00.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:00 vm03 ceph-mon[50703]: osdmap e355: 8 total, 8 up, 8 in
2026-03-10T08:43:00.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:00 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1448091958' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:01.450 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_stats PASSED [ 90%]
2026-03-10T08:43:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:01 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1448091958' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:43:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:01 vm06 ceph-mon[54477]: osdmap e356: 8 total, 8 up, 8 in
2026-03-10T08:43:01.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:01 vm06 ceph-mon[54477]: osdmap e357: 8 total, 8 up, 8 in
2026-03-10T08:43:01.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:01 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1448091958' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:43:01.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:01 vm03 ceph-mon[57160]: osdmap e356: 8 total, 8 up, 8 in
2026-03-10T08:43:01.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:01 vm03 ceph-mon[57160]: osdmap e357: 8 total, 8 up, 8 in
2026-03-10T08:43:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:01 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1448091958' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:43:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:01 vm03 ceph-mon[50703]: osdmap e356: 8 total, 8 up, 8 in
2026-03-10T08:43:01.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:01 vm03 ceph-mon[50703]: osdmap e357: 8 total, 8 up, 8 in
2026-03-10T08:43:02.839 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:43:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:43:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:02 vm06 ceph-mon[54477]: pgmap v498: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail
2026-03-10T08:43:02.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:02 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:43:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:02 vm03 ceph-mon[57160]: pgmap v498: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail
2026-03-10T08:43:02.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:02 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:43:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:02 vm03 ceph-mon[50703]: pgmap v498: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail
2026-03-10T08:43:02.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:02 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:43:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:03 vm06 ceph-mon[54477]: osdmap e358: 8 total, 8 up, 8 in
2026-03-10T08:43:03.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:03 vm06 ceph-mon[54477]: osdmap e359: 8 total, 8 up, 8 in
2026-03-10T08:43:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:03 vm03 ceph-mon[57160]: osdmap e358: 8 total, 8 up, 8 in
2026-03-10T08:43:03.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:03 vm03 ceph-mon[57160]: osdmap e359: 8 total, 8 up, 8 in
2026-03-10T08:43:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:03 vm03 ceph-mon[50703]: osdmap e358: 8 total, 8 up, 8 in
2026-03-10T08:43:03.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:03 vm03 ceph-mon[50703]: osdmap e359: 8 total, 8 up, 8 in
2026-03-10T08:43:04.475 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_read PASSED [ 91%]
2026-03-10T08:43:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:04 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:43:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:04 vm03 ceph-mon[57160]: pgmap v501: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:04.829 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:04 vm03 ceph-mon[57160]: osdmap e360: 8 total, 8 up, 8 in
2026-03-10T08:43:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:04 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:43:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:04 vm03 ceph-mon[50703]: pgmap v501: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:04.829 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:04 vm03 ceph-mon[50703]: osdmap e360: 8 total, 8 up, 8 in
2026-03-10T08:43:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:04 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:43:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:04 vm06 ceph-mon[54477]: pgmap v501: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:04.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:04 vm06 ceph-mon[54477]: osdmap e360: 8 total, 8 up, 8 in
2026-03-10T08:43:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:06 vm06 ceph-mon[54477]: pgmap v504: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:06 vm06 ceph-mon[54477]: osdmap e361: 8 total, 8 up, 8 in
2026-03-10T08:43:06.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:06 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:06 vm03 ceph-mon[57160]: pgmap v504: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:06 vm03 ceph-mon[57160]: osdmap e361: 8 total, 8 up, 8 in
2026-03-10T08:43:06.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:06 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:06 vm03 ceph-mon[50703]: pgmap v504: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:06 vm03 ceph-mon[50703]: osdmap e361: 8 total, 8 up, 8 in
2026-03-10T08:43:06.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:06 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:07.501 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_seek PASSED [ 92%]
2026-03-10T08:43:07.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:07 vm06 ceph-mon[54477]: osdmap e362: 8 total, 8 up, 8 in
2026-03-10T08:43:07.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:07 vm03 ceph-mon[57160]: osdmap e362: 8 total, 8 up, 8 in
2026-03-10T08:43:07.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:07 vm03 ceph-mon[50703]: osdmap e362: 8 total, 8 up, 8 in
2026-03-10T08:43:08.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:08 vm06 ceph-mon[54477]: pgmap v507: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T08:43:08.839 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:08 vm06 ceph-mon[54477]: osdmap e363: 8 total, 8 up, 8 in
2026-03-10T08:43:08.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:08 vm03 ceph-mon[57160]: pgmap v507: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T08:43:08.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:08 vm03 ceph-mon[57160]: osdmap e363: 8 total, 8 up, 8 in
2026-03-10T08:43:08.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:08 vm03 ceph-mon[50703]: pgmap v507: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T08:43:08.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:08 vm03 ceph-mon[50703]: osdmap e363: 8 total, 8 up, 8 in
2026-03-10T08:43:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:09 vm03 ceph-mon[57160]: osdmap e364: 8 total, 8 up, 8 in
2026-03-10T08:43:09.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:09 vm03 ceph-mon[57160]: pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:09.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:43:09 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:43:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:43:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:09 vm03 ceph-mon[50703]: osdmap e364: 8 total, 8 up, 8 in
2026-03-10T08:43:09.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:09 vm03 ceph-mon[50703]: pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:09 vm06 ceph-mon[54477]: osdmap e364: 8 total, 8 up, 8 in
2026-03-10T08:43:10.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:09 vm06 ceph-mon[54477]: pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:10.625 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_write PASSED [ 93%]
2026-03-10T08:43:10.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:10 vm03 ceph-mon[57160]: osdmap e365: 8 total, 8 up, 8 in
2026-03-10T08:43:10.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:10 vm03 ceph-mon[50703]: osdmap e365: 8 total, 8 up, 8 in
2026-03-10T08:43:11.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:10 vm06 ceph-mon[54477]: osdmap e365: 8 total, 8 up, 8 in
2026-03-10T08:43:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:11 vm03 ceph-mon[57160]: osdmap e366: 8 total, 8 up, 8 in
2026-03-10T08:43:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:11 vm03 ceph-mon[57160]: pgmap v513: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail
2026-03-10T08:43:11.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:11 vm03 ceph-mon[57160]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:11 vm03 ceph-mon[50703]: osdmap e366: 8 total, 8 up, 8 in
2026-03-10T08:43:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:11 vm03 ceph-mon[50703]: pgmap v513: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail
2026-03-10T08:43:11.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:11 vm03 ceph-mon[50703]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:11 vm06 ceph-mon[54477]: osdmap e366: 8 total, 8 up, 8 in
2026-03-10T08:43:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:11 vm06 ceph-mon[54477]: pgmap v513: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail
2026-03-10T08:43:12.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:11 vm06 ceph-mon[54477]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:12.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:12 vm03 ceph-mon[57160]: osdmap e367: 8 total, 8 up, 8 in
2026-03-10T08:43:12.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:12 vm03 ceph-mon[50703]: osdmap e367: 8 total, 8 up, 8 in
2026-03-10T08:43:13.089 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:43:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available
2026-03-10T08:43:13.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:12 vm06 ceph-mon[54477]: osdmap e367: 8 total, 8 up, 8 in
2026-03-10T08:43:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:13 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:43:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:13 vm03 ceph-mon[57160]: osdmap e368: 8 total, 8 up, 8 in
2026-03-10T08:43:13.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:13 vm03 ceph-mon[57160]: pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:13 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:43:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:13 vm03 ceph-mon[50703]: osdmap e368: 8 total, 8 up, 8 in
2026-03-10T08:43:13.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:13 vm03 ceph-mon[50703]: pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:13 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T08:43:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:13 vm06 ceph-mon[54477]: osdmap e368: 8 total, 8 up, 8 in
2026-03-10T08:43:14.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:13 vm06 ceph-mon[54477]: pgmap v516: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:14.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:14 vm03 ceph-mon[57160]: osdmap e369: 8 total, 8 up, 8 in
2026-03-10T08:43:14.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:14 vm03 ceph-mon[50703]: osdmap e369: 8 total, 8 up, 8 in
2026-03-10T08:43:15.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:14 vm06 ceph-mon[54477]: osdmap e369: 8 total, 8 up, 8 in
2026-03-10T08:43:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:15 vm03 ceph-mon[57160]: osdmap e370: 8 total, 8 up, 8 in
2026-03-10T08:43:15.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:15 vm03 ceph-mon[57160]: pgmap v519: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:15 vm03 ceph-mon[50703]: osdmap e370: 8 total, 8 up, 8 in
2026-03-10T08:43:15.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:15 vm03 ceph-mon[50703]: pgmap v519: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:15 vm06 ceph-mon[54477]: osdmap e370: 8 total, 8 up, 8 in
2026-03-10T08:43:16.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:15 vm06 ceph-mon[54477]: pgmap v519: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:16 vm06 ceph-mon[54477]: osdmap e371: 8 total, 8 up, 8 in
2026-03-10T08:43:17.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:16 vm06 ceph-mon[54477]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:43:17.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:16 vm03 ceph-mon[57160]: osdmap e371: 8 total, 8 up, 8 in
2026-03-10T08:43:17.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:16 vm03 ceph-mon[57160]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:43:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:16 vm03 ceph-mon[50703]: osdmap e371: 8 total, 8 up, 8 in
2026-03-10T08:43:17.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:16 vm03 ceph-mon[50703]: from='mgr.14706 v1:192.168.123.103:0/3398328962' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T08:43:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:17 vm06 ceph-mon[54477]: osdmap e372: 8 total, 8 up, 8 in
2026-03-10T08:43:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:17 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/388348287' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:17 vm06 ceph-mon[54477]: from='client.24953 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:18.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:17 vm06 ceph-mon[54477]: pgmap v522: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[57160]: osdmap e372: 8 total, 8 up, 8 in
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/388348287' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[57160]: from='client.24953 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[57160]: pgmap v522: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[50703]: osdmap e372: 8 total, 8 up, 8 in
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/388348287' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[50703]: from='client.24953 ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T08:43:18.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:17 vm03 ceph-mon[50703]: pgmap v522: 196 pgs: 1 active+clean+snaptrim, 195 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T08:43:18.713 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoCtxSelfManagedSnaps::test PASSED [ 94%]
2026-03-10T08:43:18.730 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_monmap_dump PASSED [ 95%]
2026-03-10T08:43:18.745 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_osd_bench PASSED [ 96%]
2026-03-10T08:43:19.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:18 vm06 ceph-mon[54477]: from='client.24953 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:43:19.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:18 vm06 ceph-mon[54477]: osdmap e373: 8 total, 8 up, 8 in
2026-03-10T08:43:19.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:18 vm03 ceph-mon[57160]: from='client.24953 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:43:19.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:18 vm03 ceph-mon[57160]: osdmap e373: 8 total, 8 up, 8 in
2026-03-10T08:43:19.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:18 vm03 ceph-mon[50703]: from='client.24953 ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T08:43:19.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:18 vm03 ceph-mon[50703]: osdmap e373: 8 total, 8 up, 8 in
2026-03-10T08:43:19.727 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_ceph_osd_pool_create_utf8 PASSED [ 97%]
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[57160]: osdmap e374: 8 total, 8 up, 8 in
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/62193952' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[57160]: pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:43:19 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:43:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[50703]: osdmap e374: 8 total, 8 up, 8 in
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[50703]: from='client.? v1:192.168.123.103:0/62193952' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-10T08:43:19.928 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:19 vm03 ceph-mon[50703]: pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:19 vm06 ceph-mon[54477]: osdmap e374: 8 total, 8 up, 8 in
2026-03-10T08:43:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
2026-03-10T08:43:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T08:43:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/1165701880' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch
2026-03-10T08:43:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:19 vm06 ceph-mon[54477]: from='client.? v1:192.168.123.103:0/62193952' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch
2026-03-10T08:43:20.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:19 vm06 ceph-mon[54477]: pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T08:43:21.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:20 vm06 ceph-mon[54477]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T08:43:21.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:20 vm06 ceph-mon[54477]: from='client.?
v1:192.168.123.103:0/62193952' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T08:43:21.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:20 vm06 ceph-mon[54477]: osdmap e375: 8 total, 8 up, 8 in 2026-03-10T08:43:21.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:20 vm03 ceph-mon[57160]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:43:21.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:20 vm03 ceph-mon[57160]: from='client.? v1:192.168.123.103:0/62193952' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T08:43:21.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:20 vm03 ceph-mon[57160]: osdmap e375: 8 total, 8 up, 8 in 2026-03-10T08:43:21.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:20 vm03 ceph-mon[50703]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:43:21.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:20 vm03 ceph-mon[50703]: from='client.? 
v1:192.168.123.103:0/62193952' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T08:43:21.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:20 vm03 ceph-mon[50703]: osdmap e375: 8 total, 8 up, 8 in 2026-03-10T08:43:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:21 vm06 ceph-mon[54477]: osdmap e376: 8 total, 8 up, 8 in 2026-03-10T08:43:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:21 vm06 ceph-mon[54477]: pgmap v528: 212 pgs: 48 unknown, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:43:22.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:21 vm06 ceph-mon[54477]: osdmap e377: 8 total, 8 up, 8 in 2026-03-10T08:43:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:21 vm03 ceph-mon[57160]: osdmap e376: 8 total, 8 up, 8 in 2026-03-10T08:43:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:21 vm03 ceph-mon[57160]: pgmap v528: 212 pgs: 48 unknown, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:43:22.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:21 vm03 ceph-mon[57160]: osdmap e377: 8 total, 8 up, 8 in 2026-03-10T08:43:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:21 vm03 ceph-mon[50703]: osdmap e376: 8 total, 8 up, 8 in 2026-03-10T08:43:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:21 vm03 ceph-mon[50703]: pgmap v528: 212 pgs: 48 unknown, 164 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T08:43:22.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:21 vm03 ceph-mon[50703]: osdmap e377: 8 total, 8 up, 8 in 2026-03-10T08:43:23.089 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:43:22 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug there is no tcmu-runner data available 2026-03-10T08:43:23.752 
INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test PASSED [ 98%] 2026-03-10T08:43:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:23 vm06 ceph-mon[54477]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:43:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:23 vm06 ceph-mon[54477]: osdmap e378: 8 total, 8 up, 8 in 2026-03-10T08:43:24.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:23 vm06 ceph-mon[54477]: pgmap v531: 212 pgs: 212 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:43:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:23 vm03 ceph-mon[57160]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:43:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:23 vm03 ceph-mon[57160]: osdmap e378: 8 total, 8 up, 8 in 2026-03-10T08:43:24.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:23 vm03 ceph-mon[57160]: pgmap v531: 212 pgs: 212 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:43:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:23 vm03 ceph-mon[50703]: from='client.14580 v1:192.168.123.106:0/607962712' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T08:43:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:23 vm03 ceph-mon[50703]: osdmap e378: 8 total, 8 up, 8 in 2026-03-10T08:43:24.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:23 vm03 ceph-mon[50703]: pgmap v531: 212 pgs: 212 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T08:43:25.089 
INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:24 vm06 ceph-mon[54477]: osdmap e379: 8 total, 8 up, 8 in 2026-03-10T08:43:25.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:24 vm03 ceph-mon[57160]: osdmap e379: 8 total, 8 up, 8 in 2026-03-10T08:43:25.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:24 vm03 ceph-mon[50703]: osdmap e379: 8 total, 8 up, 8 in 2026-03-10T08:43:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:25 vm06 ceph-mon[54477]: osdmap e380: 8 total, 8 up, 8 in 2026-03-10T08:43:26.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:25 vm06 ceph-mon[54477]: pgmap v534: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:43:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:25 vm03 ceph-mon[57160]: osdmap e380: 8 total, 8 up, 8 in 2026-03-10T08:43:26.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:25 vm03 ceph-mon[57160]: pgmap v534: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:43:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:25 vm03 ceph-mon[50703]: osdmap e380: 8 total, 8 up, 8 in 2026-03-10T08:43:26.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:25 vm03 ceph-mon[50703]: pgmap v534: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:43:27.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:26 vm06 ceph-mon[54477]: osdmap e381: 8 total, 8 up, 8 in 2026-03-10T08:43:27.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:26 vm06 ceph-mon[54477]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:43:27.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:26 vm03 ceph-mon[57160]: osdmap e381: 8 total, 8 up, 8 in 2026-03-10T08:43:27.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:26 vm03 
ceph-mon[57160]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:43:27.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:26 vm03 ceph-mon[50703]: osdmap e381: 8 total, 8 up, 8 in 2026-03-10T08:43:27.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:26 vm03 ceph-mon[50703]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test_aio_notify PASSED [100%] 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:=============================== warnings summary =============================== 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:210: DeprecationWarning: invalid escape sequence \- 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: assert re.match('[0-9a-f\-]{36}', fsid, re.I) 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:960 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:960: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: @pytest.mark.wait 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:996 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:996: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: @pytest.mark.wait 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:1024 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:1024: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: @pytest.mark.wait 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: :210: DeprecationWarning: invalid escape sequence \- 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:-- Docs: https://docs.pytest.org/en/stable/warnings.html 2026-03-10T08:43:27.795 INFO:tasks.workunit.client.0.vm03.stdout:================= 91 passed, 13 warnings in 333.12s (0:05:33) ================== 2026-03-10T08:43:27.809 INFO:tasks.workunit.client.0.vm03.stderr:+ exit 0 2026-03-10T08:43:27.809 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-10T08:43:27.809 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-10T08:43:27.883 INFO:tasks.workunit:Stopping ['rados/test_python.sh'] on client.0... 
2026-03-10T08:43:27.883 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-10T08:43:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:27 vm06 ceph-mon[54477]: osdmap e382: 8 total, 8 up, 8 in 2026-03-10T08:43:28.089 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:27 vm06 ceph-mon[54477]: pgmap v537: 212 pgs: 212 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:43:28.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:27 vm03 ceph-mon[50703]: osdmap e382: 8 total, 8 up, 8 in 2026-03-10T08:43:28.178 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:27 vm03 ceph-mon[50703]: pgmap v537: 212 pgs: 212 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:43:28.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:27 vm03 ceph-mon[57160]: osdmap e382: 8 total, 8 up, 8 in 2026-03-10T08:43:28.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:27 vm03 ceph-mon[57160]: pgmap v537: 212 pgs: 212 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T08:43:28.282 DEBUG:teuthology.parallel:result is None 2026-03-10T08:43:28.283 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T08:43:28.304 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T08:43:28.304 DEBUG:teuthology.orchestra.run.vm03:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T08:43:28.359 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T08:43:28.359 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T08:43:28.361 INFO:tasks.cephadm:Teardown begin 2026-03-10T08:43:28.361 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T08:43:28.422 
DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T08:43:28.449 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T08:43:28.449 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 -- ceph mgr module disable cephadm 2026-03-10T08:43:28.623 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/mon.a/config 2026-03-10T08:43:28.639 INFO:teuthology.orchestra.run.vm03.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T08:43:28.658 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T08:43:28.658 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T08:43:28.658 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T08:43:28.673 DEBUG:teuthology.orchestra.run.vm06:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T08:43:28.689 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T08:43:28.690 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T08:43:28.690 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a 2026-03-10T08:43:28.791 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:28 vm06 ceph-mon[54477]: osdmap e383: 8 total, 8 up, 8 in 2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 systemd[1]: Stopping Ceph mon.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 ceph-mon[50703]: osdmap e383: 8 total, 8 up, 8 in 2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a[50699]: 2026-03-10T08:43:28.812+0000 7f8dc3336640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a[50699]: 2026-03-10T08:43:28.812+0000 7f8dc3336640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 podman[89491]: 2026-03-10 08:43:28.84445137 +0000 UTC m=+0.045565270 container died 8042a210ce6ff0acc9683abf0fee51f83521f4c4c12e079392cda11b71572ef4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0) 2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 podman[89491]: 
2026-03-10 08:43:28.864237127 +0000 UTC m=+0.065351027 container remove 8042a210ce6ff0acc9683abf0fee51f83521f4c4c12e079392cda11b71572ef4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-10T08:43:28.919 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 10 08:43:28 vm03 bash[89491]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-a 2026-03-10T08:43:28.920 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:28 vm03 ceph-mon[57160]: osdmap e383: 8 total, 8 up, 8 in 2026-03-10T08:43:28.931 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.a.service' 2026-03-10T08:43:28.977 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:28.978 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T08:43:28.978 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-10T08:43:28.978 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.c 2026-03-10T08:43:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:29 vm03 systemd[1]: Stopping Ceph mon.c for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:43:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-c[57156]: 2026-03-10T08:43:29.118+0000 7f09e4e88640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:29.178 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 10 08:43:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-c[57156]: 2026-03-10T08:43:29.118+0000 7f09e4e88640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T08:43:29.263 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.c.service' 2026-03-10T08:43:29.294 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:29.294 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-10T08:43:29.294 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-10T08:43:29.294 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.b 2026-03-10T08:43:29.527 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:43:29 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-y[50909]: ::ffff:192.168.123.106 - - [10/Mar/2026:08:43:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T08:43:29.579 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:29 vm06 systemd[1]: Stopping Ceph mon.b for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:43:29.579 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-b[54473]: 2026-03-10T08:43:29.500+0000 7fa42d52a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:29.579 INFO:journalctl@ceph.mon.b.vm06.stdout:Mar 10 08:43:29 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mon-b[54473]: 2026-03-10T08:43:29.500+0000 7fa42d52a640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T08:43:29.667 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mon.b.service' 2026-03-10T08:43:29.700 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:29.700 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-10T08:43:29.700 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-10T08:43:29.700 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y 2026-03-10T08:43:29.840 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 10 08:43:29 vm03 systemd[1]: Stopping Ceph mgr.y for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:43:29.932 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.y.service' 2026-03-10T08:43:29.961 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:29.961 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-10T08:43:29.961 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 
2026-03-10T08:43:29.961 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.x 2026-03-10T08:43:30.096 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:43:29 vm06 systemd[1]: Stopping Ceph mgr.x for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:43:30.096 INFO:journalctl@ceph.mgr.x.vm06.stdout:Mar 10 08:43:30 vm06 podman[83065]: 2026-03-10 08:43:30.083144796 +0000 UTC m=+0.041145491 container died d6911d108767f4bc0e5f823e6d631692cd6ed14db807952969c52eee5ec5aa04 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-mgr-x, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) 2026-03-10T08:43:30.169 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@mgr.x.service' 2026-03-10T08:43:30.201 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:30.201 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-10T08:43:30.201 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T08:43:30.201 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.0 2026-03-10T08:43:30.678 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:30 vm03 systemd[1]: Stopping Ceph osd.0 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:43:30.678 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:43:30.295+0000 7f8c4d73f640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:30.678 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:43:30.295+0000 7f8c4d73f640 -1 osd.0 383 *** Got signal Terminated *** 2026-03-10T08:43:30.678 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:30 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0[61070]: 2026-03-10T08:43:30.295+0000 7f8c4d73f640 -1 osd.0 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 podman[89804]: 2026-03-10 08:43:35.32493499 +0000 UTC m=+5.042398884 container died cc75f60941fea30914c7e3e02db46b6edbff956831b2e7da73fb87abf5454d44 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.vendor=CentOS) 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 
podman[89804]: 2026-03-10 08:43:35.344902415 +0000 UTC m=+5.062366299 container remove cc75f60941fea30914c7e3e02db46b6edbff956831b2e7da73fb87abf5454d44 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 bash[89804]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 podman[89869]: 2026-03-10 08:43:35.461900195 +0000 UTC m=+0.014870416 container create 402cfa236c3f916cb13cf2a55e85741d824aa980a13b45e6865566fde6ac7dd3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223) 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 podman[89869]: 2026-03-10 08:43:35.505916256 +0000 UTC m=+0.058886477 container init 402cfa236c3f916cb13cf2a55e85741d824aa980a13b45e6865566fde6ac7dd3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 podman[89869]: 2026-03-10 08:43:35.508510441 +0000 UTC m=+0.061480653 container start 402cfa236c3f916cb13cf2a55e85741d824aa980a13b45e6865566fde6ac7dd3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0-deactivate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 podman[89869]: 2026-03-10 08:43:35.509467653 +0000 UTC m=+0.062437874 container attach 402cfa236c3f916cb13cf2a55e85741d824aa980a13b45e6865566fde6ac7dd3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-0-deactivate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_REF=squid) 2026-03-10T08:43:35.625 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 10 08:43:35 vm03 podman[89869]: 2026-03-10 08:43:35.455628927 +0000 UTC m=+0.008599148 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:43:35.652 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.0.service' 2026-03-10T08:43:35.682 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:35.682 
INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T08:43:35.682 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T08:43:35.682 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.1 2026-03-10T08:43:35.928 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:35 vm03 systemd[1]: Stopping Ceph osd.1 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:43:35.928 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:35 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:43:35.825+0000 7fd9bc5f8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:35.928 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:35 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:43:35.825+0000 7fd9bc5f8640 -1 osd.1 383 *** Got signal Terminated *** 2026-03-10T08:43:35.928 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:35 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1[65947]: 2026-03-10T08:43:35.825+0000 7fd9bc5f8640 -1 osd.1 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:43:41.178 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:40 vm03 podman[89965]: 2026-03-10 08:43:40.863572967 +0000 UTC m=+5.057939524 container died 53076c7c3996ab41446133ffe9194086d8e3534a1499200230e78b4459901962 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T08:43:41.178 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[89965]: 2026-03-10 08:43:41.059163147 +0000 UTC m=+5.253529704 container remove 53076c7c3996ab41446133ffe9194086d8e3534a1499200230e78b4459901962 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default) 2026-03-10T08:43:41.178 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 bash[89965]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1 2026-03-10T08:43:41.442 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.1.service' 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.20611201 +0000 UTC m=+0.026983168 container create 2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.190062459 +0000 UTC m=+0.010933617 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.289903714 +0000 UTC m=+0.110774872 container init 2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1-deactivate, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default) 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.292963141 +0000 UTC m=+0.113834299 container start 2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid) 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.29392436 +0000 UTC m=+0.114795518 container attach 2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1-deactivate, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True) 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 conmon[90056]: conmon 2363118812ff57cc9dd4 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a.scope/container/memory.events 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.41197995 +0000 UTC m=+0.232851108 container died 2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1-deactivate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True) 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 podman[90045]: 2026-03-10 08:43:41.430135735 +0000 UTC m=+0.251006893 container remove 2363118812ff57cc9dd460fa3883ffd05dbf448d8bfd209d94df0b16a0945c7a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-1-deactivate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS) 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.1.service: Deactivated successfully. 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 systemd[1]: Stopped Ceph osd.1 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:43:41.468 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 08:43:41 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.1.service: Consumed 6.556s CPU time. 2026-03-10T08:43:41.479 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:41.479 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T08:43:41.479 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T08:43:41.479 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.2 2026-03-10T08:43:41.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:41 vm03 systemd[1]: Stopping Ceph osd.2 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:43:41.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:41 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:43:41.619+0000 7f2191880640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:41.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:41 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:43:41.619+0000 7f2191880640 -1 osd.2 383 *** Got signal Terminated *** 2026-03-10T08:43:41.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:41 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2[71200]: 2026-03-10T08:43:41.619+0000 7f2191880640 -1 osd.2 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 podman[90142]: 2026-03-10 08:43:46.650706313 +0000 UTC m=+5.043836584 container died 64317d700e6d57689b461b02ffa65445b5ea198e0febcfd4928202ace11f6e85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 
podman[90142]: 2026-03-10 08:43:46.678128081 +0000 UTC m=+5.071258342 container remove 64317d700e6d57689b461b02ffa65445b5ea198e0febcfd4928202ace11f6e85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.build-date=20260223) 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 bash[90142]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 podman[90208]: 2026-03-10 08:43:46.806241317 +0000 UTC m=+0.014579502 container create af5ea22a4fe8a2b5525b6931efc0e3ff675c872599d5d12932815c02fff3b1ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2-deactivate, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS) 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 podman[90208]: 2026-03-10 08:43:46.841904342 +0000 UTC m=+0.050242526 container init af5ea22a4fe8a2b5525b6931efc0e3ff675c872599d5d12932815c02fff3b1ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 podman[90208]: 2026-03-10 08:43:46.844425813 +0000 UTC m=+0.052763997 container start af5ea22a4fe8a2b5525b6931efc0e3ff675c872599d5d12932815c02fff3b1ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True, 
org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 podman[90208]: 2026-03-10 08:43:46.849421042 +0000 UTC m=+0.057759216 container attach af5ea22a4fe8a2b5525b6931efc0e3ff675c872599d5d12932815c02fff3b1ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T08:43:46.928 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 10 08:43:46 vm03 podman[90208]: 2026-03-10 08:43:46.800223753 +0000 UTC m=+0.008561937 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:43:47.008 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.2.service' 2026-03-10T08:43:47.038 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:47.038 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T08:43:47.038 INFO:tasks.cephadm.osd.3:Stopping osd.3... 
2026-03-10T08:43:47.038 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.3 2026-03-10T08:43:47.428 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:47 vm03 systemd[1]: Stopping Ceph osd.3 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:43:47.428 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:47 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3[76397]: 2026-03-10T08:43:47.168+0000 7f8dbd4f3640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:47.428 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:47 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3[76397]: 2026-03-10T08:43:47.168+0000 7f8dbd4f3640 -1 osd.3 383 *** Got signal Terminated *** 2026-03-10T08:43:47.428 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:47 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3[76397]: 2026-03-10T08:43:47.168+0000 7f8dbd4f3640 -1 osd.3 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:43:52.505 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90306]: 2026-03-10 08:43:52.201913917 +0000 UTC m=+5.045890620 container died c8f12ee6b836d046adcd92f122145775eff47de722273a3d071011c2c8861236 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) 2026-03-10T08:43:52.505 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90306]: 2026-03-10 08:43:52.22679466 +0000 UTC m=+5.070771352 container remove c8f12ee6b836d046adcd92f122145775eff47de722273a3d071011c2c8861236 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223) 2026-03-10T08:43:52.505 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 bash[90306]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3 2026-03-10T08:43:52.505 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90375]: 2026-03-10 08:43:52.355764138 +0000 UTC m=+0.014379467 container create 203a14146b2580b25c1aa36c74e1b79a48a102b797a200dfb4a8eaf308567234 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3-deactivate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, 
FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-10T08:43:52.505 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90375]: 2026-03-10 08:43:52.386957668 +0000 UTC m=+0.045572996 container init 203a14146b2580b25c1aa36c74e1b79a48a102b797a200dfb4a8eaf308567234 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3-deactivate, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T08:43:52.505 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90375]: 2026-03-10 08:43:52.390813004 +0000 UTC m=+0.049428322 container start 203a14146b2580b25c1aa36c74e1b79a48a102b797a200dfb4a8eaf308567234 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3-deactivate, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T08:43:52.506 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90375]: 2026-03-10 08:43:52.391706607 +0000 UTC m=+0.050321935 container attach 203a14146b2580b25c1aa36c74e1b79a48a102b797a200dfb4a8eaf308567234 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-3-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) 2026-03-10T08:43:52.506 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 podman[90375]: 2026-03-10 08:43:52.349900093 +0000 UTC m=+0.008515421 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:43:52.506 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 10 08:43:52 vm03 conmon[90387]: conmon 203a14146b2580b25c1a : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-203a14146b2580b25c1aa36c74e1b79a48a102b797a200dfb4a8eaf308567234.scope/container/memory.events 2026-03-10T08:43:52.547 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.3.service' 2026-03-10T08:43:52.575 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:52.575 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-10T08:43:52.575 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-10T08:43:52.575 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.4 2026-03-10T08:43:53.089 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:52 vm06 systemd[1]: Stopping Ceph osd.4 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:43:53.089 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:43:52.671+0000 7fe825b78640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:53.089 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:43:52.671+0000 7fe825b78640 -1 osd.4 383 *** Got signal Terminated *** 2026-03-10T08:43:53.089 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:52 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4[59063]: 2026-03-10T08:43:52.671+0000 7fe825b78640 -1 osd.4 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:43:53.589 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:43:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:43:53.206Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.103:8765: connect: connection refused" 2026-03-10T08:43:53.589 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:43:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:43:53.209Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.103:8765: connect: connection refused" 2026-03-10T08:43:53.589 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:43:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:43:53.210Z 
caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.103:8765: connect: connection refused" 2026-03-10T08:43:53.589 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:43:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:43:53.212Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.103:8765: connect: connection refused" 2026-03-10T08:43:53.589 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:43:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:43:53.212Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.103:8765: connect: connection refused" 2026-03-10T08:43:53.589 INFO:journalctl@ceph.prometheus.a.vm06.stdout:Mar 10 08:43:53 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-prometheus-a[81473]: ts=2026-03-10T08:43:53.212Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.103:8765: connect: connection refused" 2026-03-10T08:43:57.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:43:56 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:43:56.782+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 
(oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:43:57.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:43:56 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:43:56.932+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:43:57.966 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:43:57 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:43:57.772+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83184]: 2026-03-10 08:43:57.711923318 +0000 UTC m=+5.050564234 container died ce2722250a2358e5a1189228c8783f44a227c8150f746057ff0a71ceaaedcdbf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83184]: 2026-03-10 08:43:57.734112703 +0000 UTC m=+5.072753630 
container remove ce2722250a2358e5a1189228c8783f44a227c8150f746057ff0a71ceaaedcdbf (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 bash[83184]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83253]: 2026-03-10 08:43:57.857828154 +0000 UTC m=+0.015313777 container create 96e99fa4a522f79c5bbdf324d2b4571649cd986052b9536193d98cc9dd99222d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83253]: 2026-03-10 08:43:57.894206939 +0000 UTC m=+0.051692553 container init 96e99fa4a522f79c5bbdf324d2b4571649cd986052b9536193d98cc9dd99222d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4-deactivate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.41.3) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83253]: 2026-03-10 08:43:57.898369842 +0000 UTC m=+0.055855465 container start 96e99fa4a522f79c5bbdf324d2b4571649cd986052b9536193d98cc9dd99222d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83253]: 2026-03-10 08:43:57.899255009 +0000 UTC m=+0.056740632 container attach 96e99fa4a522f79c5bbdf324d2b4571649cd986052b9536193d98cc9dd99222d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-4-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T08:43:57.967 INFO:journalctl@ceph.osd.4.vm06.stdout:Mar 10 08:43:57 vm06 podman[83253]: 2026-03-10 08:43:57.851487634 +0000 UTC m=+0.008973267 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:43:58.065 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.4.service' 2026-03-10T08:43:58.095 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:43:58.096 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-10T08:43:58.096 INFO:tasks.cephadm.osd.5:Stopping osd.5... 
2026-03-10T08:43:58.096 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.5 2026-03-10T08:43:58.228 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:43:58 vm06 systemd[1]: Stopping Ceph osd.5 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:43:58.228 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:43:57 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:43:57.964+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:43:58.589 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:43:58 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:43:58.226+0000 7fbfc182a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:43:58.589 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:43:58 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:43:58.226+0000 7fbfc182a640 -1 osd.5 383 *** Got signal Terminated *** 2026-03-10T08:43:58.589 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:43:58 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:43:58.226+0000 7fbfc182a640 -1 osd.5 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:43:59.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:43:58 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:43:58.761+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:43:59.089 
INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:43:58 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:43:58.998+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:00.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:43:59 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:43:59.719+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:00.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:43:59 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:43:59.964+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:00.403 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:00 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:44:00.397+0000 7fbfbde43640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:35.219949+0000 front 2026-03-10T08:43:35.219934+0000 (oldest deadline 2026-03-10T08:43:59.919543+0000) 2026-03-10T08:44:01.007 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:00 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:00.754+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:01.340 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:01 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:01.005+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:01.783 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:01 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:44:01.391+0000 7fbfbde43640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:35.219949+0000 front 2026-03-10T08:43:35.219934+0000 (oldest deadline 2026-03-10T08:43:59.919543+0000) 2026-03-10T08:44:02.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:01 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:01.780+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:02.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:01 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:01.983+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:02.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:01 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:01.983+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:02.772 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 
2026-03-10T08:44:02.384+0000 7fbfbde43640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:35.219949+0000 front 2026-03-10T08:43:35.219934+0000 (oldest deadline 2026-03-10T08:43:59.919543+0000) 2026-03-10T08:44:02.772 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5[64235]: 2026-03-10T08:44:02.384+0000 7fbfbde43640 -1 osd.5 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:40.420145+0000 front 2026-03-10T08:43:40.420344+0000 (oldest deadline 2026-03-10T08:44:01.519987+0000) 2026-03-10T08:44:03.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:02.771+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:03.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:02.981+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:03.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:02 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:02.981+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83351]: 2026-03-10 08:44:03.264437138 +0000 UTC m=+5.049626470 container died 
3d56776bd57b4f8c8e278a52088be10db41a6491c3f8a7d8413feccda11316b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83351]: 2026-03-10 08:44:03.289323632 +0000 UTC m=+5.074512974 container remove 3d56776bd57b4f8c8e278a52088be10db41a6491c3f8a7d8413feccda11316b9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 bash[83351]: 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83437]: 2026-03-10 08:44:03.422143905 +0000 UTC m=+0.015771363 container create a5f7a7e3a7f84a6d0762f339e9bfbe825d3357ec6da50932e7e7669a35418d81 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83437]: 2026-03-10 08:44:03.462468728 +0000 UTC m=+0.056096176 container init a5f7a7e3a7f84a6d0762f339e9bfbe825d3357ec6da50932e7e7669a35418d81 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5-deactivate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83437]: 2026-03-10 08:44:03.465075738 +0000 UTC m=+0.058703196 container start a5f7a7e3a7f84a6d0762f339e9bfbe825d3357ec6da50932e7e7669a35418d81 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5-deactivate, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, OSD_FLAVOR=default) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83437]: 2026-03-10 08:44:03.469637717 +0000 UTC m=+0.063265175 container attach a5f7a7e3a7f84a6d0762f339e9bfbe825d3357ec6da50932e7e7669a35418d81 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-5-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, OSD_FLAVOR=default, 
org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T08:44:03.515 INFO:journalctl@ceph.osd.5.vm06.stdout:Mar 10 08:44:03 vm06 podman[83437]: 2026-03-10 08:44:03.415408326 +0000 UTC m=+0.009035795 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:44:03.612 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.5.service' 2026-03-10T08:44:03.641 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:44:03.642 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-10T08:44:03.642 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-10T08:44:03.642 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.6 2026-03-10T08:44:03.777 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:03 vm06 systemd[1]: Stopping Ceph osd.6 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:44:03.777 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:03.723+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:03.777 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:03.723+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:38.184890+0000 front 2026-03-10T08:43:38.184953+0000 (oldest deadline 2026-03-10T08:44:02.884547+0000) 2026-03-10T08:44:04.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:03.774+0000 7fb03eada640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:44:04.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:03.774+0000 7fb03eada640 -1 osd.6 383 *** Got signal Terminated *** 2026-03-10T08:44:04.089 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:03 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:03.774+0000 7fb03eada640 -1 osd.6 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:44:04.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:04 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:04.009+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 
2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:04.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:04 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:04.009+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:05.020 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:04 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:04.685+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:05.020 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:04 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:04.685+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:38.184890+0000 front 2026-03-10T08:43:38.184953+0000 (oldest deadline 2026-03-10T08:44:02.884547+0000) 2026-03-10T08:44:05.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:05 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:05.018+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:05.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:05 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:05.018+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:05.984 
INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:05 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:05.704+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:05.984 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:05 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:05.704+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:38.184890+0000 front 2026-03-10T08:43:38.184953+0000 (oldest deadline 2026-03-10T08:44:02.884547+0000) 2026-03-10T08:44:06.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:05 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:05.982+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:06.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:05 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:05.982+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:06.967 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:06 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:06.657+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:06.967 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:06 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:06.657+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:38.184890+0000 front 2026-03-10T08:43:38.184953+0000 (oldest deadline 2026-03-10T08:44:02.884547+0000) 2026-03-10T08:44:07.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:06 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:06.966+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:07.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:06 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:06.966+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:08.001 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:07 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:07.700+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:08.001 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:07 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:07.700+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:38.184890+0000 front 2026-03-10T08:43:38.184953+0000 (oldest deadline 2026-03-10T08:44:02.884547+0000) 2026-03-10T08:44:08.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 
2026-03-10T08:44:07.998+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:08.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:07.998+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:08.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:07.998+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:08.960 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:08.695+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.184539+0000 front 2026-03-10T08:43:31.184467+0000 (oldest deadline 2026-03-10T08:43:55.884073+0000) 2026-03-10T08:44:08.960 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:08 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6[69225]: 2026-03-10T08:44:08.695+0000 7fb03b0f3640 -1 osd.6 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:38.184890+0000 front 2026-03-10T08:43:38.184953+0000 (oldest deadline 2026-03-10T08:44:02.884547+0000) 2026-03-10T08:44:08.960 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:08 vm06 podman[83531]: 2026-03-10 08:44:08.802376577 +0000 UTC m=+5.038514069 container died 
5795909369a6bdcd6376a14bcccc01e67f9027c54b3355b09fb2ed6de582b12a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T08:44:08.960 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:08 vm06 podman[83531]: 2026-03-10 08:44:08.822835643 +0000 UTC m=+5.058973135 container remove 5795909369a6bdcd6376a14bcccc01e67f9027c54b3355b09fb2ed6de582b12a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True) 2026-03-10T08:44:08.960 INFO:journalctl@ceph.osd.6.vm06.stdout:Mar 10 08:44:08 vm06 bash[83531]: 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-6 2026-03-10T08:44:09.162 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.6.service' 2026-03-10T08:44:09.191 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:44:09.191 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-10T08:44:09.191 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-10T08:44:09.191 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.7 2026-03-10T08:44:09.252 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.014+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:09.252 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.014+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:09.252 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.014+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:09.540 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 systemd[1]: Stopping Ceph osd.7 for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:44:09.540 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.328+0000 7f788d60d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:44:09.540 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.328+0000 7f788d60d640 -1 osd.7 383 *** Got signal Terminated *** 2026-03-10T08:44:09.540 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.328+0000 7f788d60d640 -1 osd.7 383 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T08:44:10.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.972+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:10.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.972+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:10.089 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:09 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:09.972+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 
2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:11.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:10 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:10.991+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:11.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:10 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:10.991+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:11.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:10 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:10.991+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:12.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:12.016+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:12.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:12.016+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:12.339 
INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:12.016+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:13.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:12.976+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:13.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:12.976+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:13.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:12 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:12.976+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:14.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:13 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:13.965+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-10T08:43:31.965856+0000 front 2026-03-10T08:43:31.965803+0000 (oldest deadline 2026-03-10T08:43:56.065348+0000) 2026-03-10T08:44:14.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:13 vm06 
ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:13.965+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-10T08:43:36.065894+0000 front 2026-03-10T08:43:36.065999+0000 (oldest deadline 2026-03-10T08:44:01.365648+0000) 2026-03-10T08:44:14.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:13 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:13.965+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-10T08:43:44.866331+0000 front 2026-03-10T08:43:44.866246+0000 (oldest deadline 2026-03-10T08:44:07.766037+0000) 2026-03-10T08:44:14.339 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:13 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7[74328]: 2026-03-10T08:44:13.965+0000 7f7889425640 -1 osd.7 383 heartbeat_check: no reply from 192.168.123.103:6815 osd.3 since back 2026-03-10T08:43:47.766381+0000 front 2026-03-10T08:43:47.766427+0000 (oldest deadline 2026-03-10T08:44:13.666250+0000) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83693]: 2026-03-10 08:44:14.370826046 +0000 UTC m=+5.054346124 container died 9c6225b1d6cc323c8d1c6427a06f970d60e7ba2241f012ba9ecb0c333c7fd8e8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83693]: 2026-03-10 08:44:14.391189273 +0000 UTC m=+5.074709351 container remove 9c6225b1d6cc323c8d1c6427a06f970d60e7ba2241f012ba9ecb0c333c7fd8e8 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 bash[83693]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83776]: 2026-03-10 08:44:14.520927347 +0000 UTC m=+0.014792390 container create dc52d8cbc2bcbe8f924dbee9aac94016ddcc49534566df18e440bc5d676f49af (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, 
org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83776]: 2026-03-10 08:44:14.558108866 +0000 UTC m=+0.051973909 container init dc52d8cbc2bcbe8f924dbee9aac94016ddcc49534566df18e440bc5d676f49af (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7-deactivate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83776]: 2026-03-10 08:44:14.560922463 +0000 UTC m=+0.054787505 container start dc52d8cbc2bcbe8f924dbee9aac94016ddcc49534566df18e440bc5d676f49af (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83776]: 2026-03-10 08:44:14.566654393 +0000 UTC m=+0.060519435 container attach dc52d8cbc2bcbe8f924dbee9aac94016ddcc49534566df18e440bc5d676f49af (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83776]: 2026-03-10 08:44:14.514935722 +0000 UTC m=+0.008800784 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T08:44:14.680 INFO:journalctl@ceph.osd.7.vm06.stdout:Mar 10 08:44:14 vm06 podman[83776]: 2026-03-10 08:44:14.679739583 +0000 UTC m=+0.173604636 container died 
dc52d8cbc2bcbe8f924dbee9aac94016ddcc49534566df18e440bc5d676f49af (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-osd-7-deactivate, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True) 2026-03-10T08:44:14.705 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@osd.7.service' 2026-03-10T08:44:14.775 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:44:14.775 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-10T08:44:14.775 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-10T08:44:14.775 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@rgw.foo.a 2026-03-10T08:44:15.178 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 10 08:44:14 vm03 systemd[1]: Stopping Ceph rgw.foo.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:44:15.178 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 10 08:44:14 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-rgw-foo-a[80324]: 2026-03-10T08:44:14.864+0000 7f1269a57640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T08:44:15.178 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 10 08:44:14 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-rgw-foo-a[80324]: 2026-03-10T08:44:14.865+0000 7f126d2c6980 -1 shutting down 2026-03-10T08:44:24.956 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@rgw.foo.a.service' 2026-03-10T08:44:24.985 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:44:24.985 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-10T08:44:24.985 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-10T08:44:24.985 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@prometheus.a 2026-03-10T08:44:25.168 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@prometheus.a.service' 2026-03-10T08:44:25.199 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T08:44:25.199 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-10T08:44:25.199 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 --force --keep-logs 2026-03-10T08:44:25.331 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:44:26.762 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 systemd[1]: Stopping Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 
2026-03-10T08:44:26.762 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a[86268]: ts=2026-03-10T08:44:26.729Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-10T08:44:26.762 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 podman[90996]: 2026-03-10 08:44:26.740460095 +0000 UTC m=+0.023832962 container died f60a6222fc42982ee65fcbbdd3de9efd0161aa5e3cfadd6f20c09eff912c67a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:44:26.762 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 podman[90996]: 2026-03-10 08:44:26.754837387 +0000 UTC m=+0.038210263 container remove f60a6222fc42982ee65fcbbdd3de9efd0161aa5e3cfadd6f20c09eff912c67a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T08:44:26.762 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 podman[90996]: 2026-03-10 08:44:26.756040931 +0000 UTC m=+0.039413807 volume remove ef2e832c51027784a47e177a8df3bb1527b32c7730a1a716f6af31eb42392b8e 2026-03-10T08:44:26.762 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 bash[90996]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-alertmanager-a 2026-03-10T08:44:27.055 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@alertmanager.a.service: Deactivated successfully. 2026-03-10T08:44:27.055 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 10 08:44:26 vm03 systemd[1]: Stopped Ceph alertmanager.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 
2026-03-10T08:44:27.055 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:26 vm03 systemd[1]: Stopping Ceph node-exporter.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:44:27.055 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 podman[91098]: 2026-03-10 08:44:27.039473926 +0000 UTC m=+0.015163055 container died d80da177b8ae53b5fbe0c5b8055ff91ddea30542815ccb27fdcd9597578cd1a1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T08:44:27.055 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 podman[91098]: 2026-03-10 08:44:27.051833629 +0000 UTC m=+0.027522759 container remove d80da177b8ae53b5fbe0c5b8055ff91ddea30542815ccb27fdcd9597578cd1a1 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T08:44:27.055 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 bash[91098]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-a 2026-03-10T08:44:27.355 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-10T08:44:27.355 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-10T08:44:27.355 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 systemd[1]: Stopped Ceph node-exporter.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:44:27.355 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 10 08:44:27 vm03 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.a.service: Consumed 1.113s CPU time. 
2026-03-10T08:44:27.694 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 --force --keep-logs 2026-03-10T08:44:27.820 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 2026-03-10T08:44:28.920 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:28 vm06 systemd[1]: Stopping Ceph iscsi.iscsi.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543... 2026-03-10T08:44:29.339 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:28 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a[78511]: debug Shutdown received 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:38 vm06 bash[84293]: time="2026-03-10T08:44:38Z" level=warning msg="StopSignal SIGTERM failed to stop container ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a in 10 seconds, resorting to SIGKILL" 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 podman[84293]: 2026-03-10 08:44:39.009118405 +0000 UTC m=+10.037948459 container died fdebdac5e54aea5a4e4ddfe10cc350120d84611e9b75a6c97c8ba906615949d6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 
2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 podman[84293]: 2026-03-10 08:44:39.031576965 +0000 UTC m=+10.060407019 container remove fdebdac5e54aea5a4e4ddfe10cc350120d84611e9b75a6c97c8ba906615949d6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 bash[84293]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-iscsi-iscsi-a 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: Stopped Ceph iscsi.iscsi.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543. 2026-03-10T08:44:39.271 INFO:journalctl@ceph.iscsi.iscsi.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@iscsi.iscsi.a.service: Consumed 1.204s CPU time. 
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: Stopping Ceph grafana.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543...
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=server t=2026-03-10T08:44:39.782904539Z level=info msg="Shutdown started" reason="System signal: terminated"
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=tracing t=2026-03-10T08:44:39.783394076Z level=info msg="Closing tracing"
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=ticker t=2026-03-10T08:44:39.784295312Z level=info msg=stopped last_tick=2026-03-10T08:44:30Z
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=grafana-apiserver t=2026-03-10T08:44:39.784420497Z level=info msg="StorageObjectCountTracker pruner is exiting"
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a[80576]: logger=sqlstore.transactions t=2026-03-10T08:44:39.795052759Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
2026-03-10T08:44:39.822 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 podman[84537]: 2026-03-10 08:44:39.80555122 +0000 UTC m=+0.035857782 container died 5df7ddb3dabb26331b64b3d22e4d7621ea6b0f000922d8ed4a999cb8a38dcaad (image=quay.io/ceph/grafana:10.4.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a, maintainer=Grafana Labs )
2026-03-10T08:44:40.089 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 systemd[1]: Stopping Ceph node-exporter.b for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543...
2026-03-10T08:44:40.089 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 podman[84537]: 2026-03-10 08:44:39.821294701 +0000 UTC m=+0.051601263 container remove 5df7ddb3dabb26331b64b3d22e4d7621ea6b0f000922d8ed4a999cb8a38dcaad (image=quay.io/ceph/grafana:10.4.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a, maintainer=Grafana Labs )
2026-03-10T08:44:40.089 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 bash[84537]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-grafana-a
2026-03-10T08:44:40.089 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@grafana.a.service: Deactivated successfully.
2026-03-10T08:44:40.089 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: Stopped Ceph grafana.a for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.
2026-03-10T08:44:40.089 INFO:journalctl@ceph.grafana.a.vm06.stdout:Mar 10 08:44:39 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@grafana.a.service: Consumed 3.462s CPU time.
2026-03-10T08:44:40.344 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 podman[84639]: 2026-03-10 08:44:40.116893702 +0000 UTC m=+0.023687601 container died b8061d8ffe75cec153f8abee67d6084c7737a3aa6449c2d45f90aa6a2bb328db (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b, maintainer=The Prometheus Authors )
2026-03-10T08:44:40.344 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 podman[84639]: 2026-03-10 08:44:40.130313632 +0000 UTC m=+0.037107521 container remove b8061d8ffe75cec153f8abee67d6084c7737a3aa6449c2d45f90aa6a2bb328db (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b, maintainer=The Prometheus Authors )
2026-03-10T08:44:40.344 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 bash[84639]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543-node-exporter-b
2026-03-10T08:44:40.345 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-10T08:44:40.345 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-10T08:44:40.345 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 systemd[1]: Stopped Ceph node-exporter.b for aaf0329a-1c5b-11f1-8b6f-7f2d819bb543.
2026-03-10T08:44:40.345 INFO:journalctl@ceph.node-exporter.b.vm06.stdout:Mar 10 08:44:40 vm06 systemd[1]: ceph-aaf0329a-1c5b-11f1-8b6f-7f2d819bb543@node-exporter.b.service: Consumed 1.175s CPU time.
2026-03-10T08:44:40.786 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T08:44:40.812 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T08:44:40.840 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T08:44:40.840 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964/remote/vm03/crash
2026-03-10T08:44:40.840 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/crash -- .
2026-03-10T08:44:40.876 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/crash: Cannot open: No such file or directory
2026-03-10T08:44:40.876 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now
2026-03-10T08:44:40.877 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964/remote/vm06/crash
2026-03-10T08:44:40.877 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/crash -- .
2026-03-10T08:44:40.908 INFO:teuthology.orchestra.run.vm06.stderr:tar: /var/lib/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/crash: Cannot open: No such file or directory
2026-03-10T08:44:40.908 INFO:teuthology.orchestra.run.vm06.stderr:tar: Error is not recoverable: exiting now
2026-03-10T08:44:40.909 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T08:44:40.909 DEBUG:teuthology.orchestra.run.vm03:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(PG_' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | head -n 1
2026-03-10T08:44:40.947 INFO:tasks.cephadm:Compressing logs...
2026-03-10T08:44:40.947 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:44:40.990 DEBUG:teuthology.orchestra.run.vm06:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:44:41.012 INFO:teuthology.orchestra.run.vm03.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T08:44:41.012 INFO:teuthology.orchestra.run.vm03.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T08:44:41.013 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.a.log
2026-03-10T08:44:41.013 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log
2026-03-10T08:44:41.014 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.a.log: 91.3% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T08:44:41.015 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mgr.y.log
2026-03-10T08:44:41.016 INFO:teuthology.orchestra.run.vm06.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T08:44:41.016 INFO:teuthology.orchestra.run.vm06.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T08:44:41.017 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log: 92.5% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log.gz
2026-03-10T08:44:41.017 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.audit.log
2026-03-10T08:44:41.018 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-volume.log
2026-03-10T08:44:41.018 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.b.log
2026-03-10T08:44:41.022 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.cephadm.log
2026-03-10T08:44:41.023 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.cephadm.log
2026-03-10T08:44:41.025 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.b.log: 95.4% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-volume.log.gz
2026-03-10T08:44:41.025 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.audit.log
2026-03-10T08:44:41.026 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.cephadm.log: 80.3% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.cephadm.log.gz
2026-03-10T08:44:41.026 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.audit.log: 94.1% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.audit.log.gz
2026-03-10T08:44:41.028 INFO:teuthology.orchestra.run.vm06.stderr: 91.3% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T08:44:41.028 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-volume.log
2026-03-10T08:44:41.030 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log
2026-03-10T08:44:41.031 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.audit.log: 90.4% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.audit.log.gz
2026-03-10T08:44:41.031 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.cephadm.log: 88.8% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.cephadm.log.gz
2026-03-10T08:44:41.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mgr.x.log
2026-03-10T08:44:41.033 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log: 86.5% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph.log.gz
2026-03-10T08:44:41.033 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.4.log
2026-03-10T08:44:41.033 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.c.log
2026-03-10T08:44:41.037 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mgr.x.log: 90.8% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mgr.x.log.gz
2026-03-10T08:44:41.037 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.5.log
2026-03-10T08:44:41.044 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.0.log
2026-03-10T08:44:41.044 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.6.log
2026-03-10T08:44:41.048 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.c.log: 95.4% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-volume.log.gz
2026-03-10T08:44:41.048 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.7.log
2026-03-10T08:44:41.052 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.1.log
2026-03-10T08:44:41.059 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.2.log
2026-03-10T08:44:41.062 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/tcmu-runner.log
2026-03-10T08:44:41.069 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.3.log
2026-03-10T08:44:41.073 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.7.log: /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/tcmu-runner.log: 63.1% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/tcmu-runner.log.gz
2026-03-10T08:44:41.079 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-client.rgw.foo.a.log
2026-03-10T08:44:41.091 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.3.log: /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-client.rgw.foo.a.log: 58.4% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-client.rgw.foo.a.log.gz
2026-03-10T08:44:41.281 INFO:teuthology.orchestra.run.vm03.stderr: 89.4% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mgr.y.log.gz
2026-03-10T08:44:41.463 INFO:teuthology.orchestra.run.vm06.stderr: 91.1% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.b.log.gz
2026-03-10T08:44:41.544 INFO:teuthology.orchestra.run.vm03.stderr: 91.6% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.c.log.gz
2026-03-10T08:44:42.009 INFO:teuthology.orchestra.run.vm03.stderr: 91.2% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-mon.a.log.gz
2026-03-10T08:44:43.473 INFO:teuthology.orchestra.run.vm03.stderr: 94.6% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.2.log.gz
2026-03-10T08:44:43.649 INFO:teuthology.orchestra.run.vm06.stderr: 94.7% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.5.log.gz
2026-03-10T08:44:43.730 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.1.log.gz
2026-03-10T08:44:43.741 INFO:teuthology.orchestra.run.vm03.stderr: 94.8% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.0.log.gz
2026-03-10T08:44:43.833 INFO:teuthology.orchestra.run.vm06.stderr: 94.6% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.6.log.gz
2026-03-10T08:44:43.881 INFO:teuthology.orchestra.run.vm03.stderr: 94.8% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.3.log.gz
2026-03-10T08:44:43.883 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-10T08:44:43.883 INFO:teuthology.orchestra.run.vm03.stderr:real 0m2.880s
2026-03-10T08:44:43.883 INFO:teuthology.orchestra.run.vm03.stderr:user 0m5.338s
2026-03-10T08:44:43.883 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.265s
2026-03-10T08:44:43.889 INFO:teuthology.orchestra.run.vm06.stderr: 94.7% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.7.log.gz
2026-03-10T08:44:43.922 INFO:teuthology.orchestra.run.vm06.stderr: 94.8% -- replaced with /var/log/ceph/aaf0329a-1c5b-11f1-8b6f-7f2d819bb543/ceph-osd.4.log.gz
2026-03-10T08:44:43.923 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-10T08:44:43.923 INFO:teuthology.orchestra.run.vm06.stderr:real 0m2.917s
2026-03-10T08:44:43.923 INFO:teuthology.orchestra.run.vm06.stderr:user 0m4.882s
2026-03-10T08:44:43.923 INFO:teuthology.orchestra.run.vm06.stderr:sys 0m0.215s
2026-03-10T08:44:43.924 INFO:tasks.cephadm:Archiving logs...
2026-03-10T08:44:43.924 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964/remote/vm03/log
2026-03-10T08:44:43.924 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T08:44:44.205 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964/remote/vm06/log
2026-03-10T08:44:44.205 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T08:44:44.460 INFO:tasks.cephadm:Removing cluster...
2026-03-10T08:44:44.460 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 --force
2026-03-10T08:44:44.582 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
2026-03-10T08:44:44.808 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid aaf0329a-1c5b-11f1-8b6f-7f2d819bb543 --force
2026-03-10T08:44:44.929 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: aaf0329a-1c5b-11f1-8b6f-7f2d819bb543
2026-03-10T08:44:45.144 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T08:44:45.144 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T08:44:45.158 DEBUG:teuthology.orchestra.run.vm06:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T08:44:45.172 INFO:tasks.cephadm:Teardown complete
2026-03-10T08:44:45.172 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T08:44:45.174 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T08:44:45.174 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T08:44:45.200 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T08:44:45.242 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T08:44:45.242 DEBUG:teuthology.orchestra.run.vm03:>
2026-03-10T08:44:45.242 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T08:44:45.242 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y remove $d || true
2026-03-10T08:44:45.242 DEBUG:teuthology.orchestra.run.vm03:> done
2026-03-10T08:44:45.247 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-10T08:44:45.247 DEBUG:teuthology.orchestra.run.vm06:>
2026-03-10T08:44:45.247 DEBUG:teuthology.orchestra.run.vm06:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-10T08:44:45.247 DEBUG:teuthology.orchestra.run.vm06:> sudo yum -y remove $d || true
2026-03-10T08:44:45.247 DEBUG:teuthology.orchestra.run.vm06:> done
2026-03-10T08:44:45.431 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:45.431 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:45.431 INFO:teuthology.orchestra.run.vm03.stdout: Package          Arch      Version                     Repository      Size
2026-03-10T08:44:45.431 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw     x86_64    2:19.2.3-678.ge911bdeb.el9  @ceph           39 M
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout: mailcap          noarch    2.1.49-5.el9                @baseos         78 k
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:Remove  2 Packages
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 39 M
2026-03-10T08:44:45.432 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:45.434 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:45.435 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:45.444 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout: Package          Arch      Version                     Repository      Size
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw     x86_64    2:19.2.3-678.ge911bdeb.el9  @ceph           39 M
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout: mailcap          noarch    2.1.49-5.el9                @baseos         78 k
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:Remove  2 Packages
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 39 M
2026-03-10T08:44:45.445 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:45.447 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:45.447 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:45.448 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:45.448 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:45.460 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:45.460 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:45.477 INFO:teuthology.orchestra.run.vm03.stdout:  Preparing        :                                                        1/1
2026-03-10T08:44:45.489 INFO:teuthology.orchestra.run.vm06.stdout:  Preparing        :                                                        1/1
2026-03-10T08:44:45.500 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.500 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:45.500 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T08:44:45.500 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T08:44:45.500 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T08:44:45.500 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.503 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.509 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.509 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:45.509 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T08:44:45.509 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T08:44:45.509 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T08:44:45.509 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.512 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.512 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.521 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.527 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : mailcap-2.1.49-5.el9.noarch                            2/2
2026-03-10T08:44:45.536 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : mailcap-2.1.49-5.el9.noarch                            2/2
2026-03-10T08:44:45.590 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: mailcap-2.1.49-5.el9.noarch                            2/2
2026-03-10T08:44:45.590 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.598 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: mailcap-2.1.49-5.el9.noarch                            2/2
2026-03-10T08:44:45.598 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64         1/2
2026-03-10T08:44:45.639 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : mailcap-2.1.49-5.el9.noarch                            2/2
2026-03-10T08:44:45.639 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.639 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:45.639 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64  mailcap-2.1.49-5.el9.noarch
2026-03-10T08:44:45.639 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.639 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:45.646 INFO:teuthology.orchestra.run.vm06.stdout:  Verifying        : mailcap-2.1.49-5.el9.noarch                            2/2
2026-03-10T08:44:45.646 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.646 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:45.646 INFO:teuthology.orchestra.run.vm06.stdout:  ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64  mailcap-2.1.49-5.el9.noarch
2026-03-10T08:44:45.646 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.646 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:45.831 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout: Package          Arch      Version                     Repository      Size
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test        x86_64    2:19.2.3-678.ge911bdeb.el9  @ceph          210 M
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout: libxslt          x86_64    1.1.34-12.el9               @appstream     743 k
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout: socat            x86_64    1.7.4.1-8.el9               @appstream     1.1 M
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet       x86_64    1.6.1-20.el9                @appstream     195 k
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:Remove  4 Packages
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 212 M
2026-03-10T08:44:45.832 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:45.834 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:45.834 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout: Package          Arch      Version                     Repository      Size
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test        x86_64    2:19.2.3-678.ge911bdeb.el9  @ceph          210 M
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T08:44:45.836 INFO:teuthology.orchestra.run.vm06.stdout: libxslt          x86_64    1.1.34-12.el9               @appstream     743 k
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout: socat            x86_64    1.7.4.1-8.el9               @appstream     1.1 M
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet       x86_64    1.6.1-20.el9                @appstream     195 k
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:Remove  4 Packages
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 212 M
2026-03-10T08:44:45.837 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:45.840 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:45.840 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:45.857 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:45.857 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:45.862 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:45.862 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:45.917 INFO:teuthology.orchestra.run.vm03.stdout:  Preparing        :                                                        1/1
2026-03-10T08:44:45.922 INFO:teuthology.orchestra.run.vm06.stdout:  Preparing        :                                                        1/1
2026-03-10T08:44:45.923 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64            1/4
2026-03-10T08:44:45.925 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : xmlstarlet-1.6.1-20.el9.x86_64                         2/4
2026-03-10T08:44:45.929 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64            1/4
2026-03-10T08:44:45.929 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libxslt-1.1.34-12.el9.x86_64                           3/4
2026-03-10T08:44:45.932 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : xmlstarlet-1.6.1-20.el9.x86_64                         2/4
2026-03-10T08:44:45.935 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : libxslt-1.1.34-12.el9.x86_64                           3/4
2026-03-10T08:44:45.945 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : socat-1.7.4.1-8.el9.x86_64                             4/4
2026-03-10T08:44:45.949 INFO:teuthology.orchestra.run.vm06.stdout:  Erasing          : socat-1.7.4.1-8.el9.x86_64                             4/4
2026-03-10T08:44:46.008 INFO:teuthology.orchestra.run.vm06.stdout:  Running scriptlet: socat-1.7.4.1-8.el9.x86_64                             4/4
2026-03-10T08:44:46.009 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T08:44:46.009 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T08:44:46.009 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T08:44:46.014 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T08:44:46.014 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T08:44:46.014 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T08:44:46.014 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T08:44:46.055 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T08:44:46.055 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.055 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:46.055 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T08:44:46.056 INFO:teuthology.orchestra.run.vm06.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T08:44:46.056 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.056 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.064 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:46.247 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:Remove 8 Packages
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 28 M
2026-03-10T08:44:46.248 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:46.251 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:46.251 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:46.275 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:46.275 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Remove 8 Packages
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 28 M
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:46.293 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:46.314 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:46.314 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:46.316 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:46.321 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T08:44:46.325 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T08:44:46.327 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T08:44:46.329 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T08:44:46.332 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T08:44:46.334 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.355 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T08:44:46.362 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T08:44:46.364 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:44:46.369 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T08:44:46.373 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T08:44:46.375 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T08:44:46.378 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T08:44:46.380 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T08:44:46.382 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T08:44:46.384 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T08:44:46.384 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:46.384 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T08:44:46.384 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T08:44:46.384 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T08:44:46.384 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.386 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.412 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T08:44:46.419 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T08:44:46.440 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T08:44:46.440 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:46.440 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T08:44:46.440 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T08:44:46.440 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T08:44:46.440 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.442 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T08:44:46.483 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T08:44:46.483 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T08:44:46.483 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T08:44:46.483 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T08:44:46.483 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T08:44:46.484 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T08:44:46.484 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T08:44:46.484 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T08:44:46.533 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T08:44:46.537 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T08:44:46.537 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.537 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:46.537 INFO:teuthology.orchestra.run.vm06.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:46.537 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:46.537 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout: zip-3.0-35.el9.x86_64
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.538 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T08:44:46.613 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T08:44:46.614 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64
2026-03-10T08:44:46.614 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.614 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:46.743 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout:===========================================================================================
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout:===========================================================================================
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T08:44:46.749 INFO:teuthology.orchestra.run.vm06.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout:===========================================================================================
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout:Remove 100 Packages
2026-03-10T08:44:46.750 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:46.751 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 612 M
2026-03-10T08:44:46.751 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:46.775 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:46.775 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:46.880 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:46.880 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:46.922 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout:===========================================================================================
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout:===========================================================================================
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T08:44:46.929 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T08:44:46.930 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout:===========================================================================================
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout:Remove 100 Packages
2026-03-10T08:44:46.931 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:46.932 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 612 M
2026-03-10T08:44:46.932 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:46.957 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:46.957 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:47.020 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:47.020 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100
2026-03-10T08:44:47.027 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.044 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-10T08:44:47.057 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-10T08:44:47.062 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:47.062 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:47.081 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/100
2026-03-10T08:44:47.081 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100
2026-03-10T08:44:47.136 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100
2026-03-10T08:44:47.143 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/100
2026-03-10T08:44:47.147 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/100
2026-03-10T08:44:47.147 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-10T08:44:47.158 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-10T08:44:47.165 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/100
2026-03-10T08:44:47.169 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/100
2026-03-10T08:44:47.178 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/100
2026-03-10T08:44:47.182 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/100
2026-03-10T08:44:47.203 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-10T08:44:47.203 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.203 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T08:44:47.203 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T08:44:47.203 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T08:44:47.203 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.204 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:44:47.204 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100
2026-03-10T08:44:47.208 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-10T08:44:47.211 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100
2026-03-10T08:44:47.217 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-10T08:44:47.230 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-10T08:44:47.230 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.230 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T08:44:47.230 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-10T08:44:47.230 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-10T08:44:47.230 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:47.231 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-10T08:44:47.232 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-10T08:44:47.232 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.233 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T08:44:47.233 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.241 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-10T08:44:47.243 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100
2026-03-10T08:44:47.250 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-10T08:44:47.252 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/100
2026-03-10T08:44:47.257 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/100
2026-03-10T08:44:47.262 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/100
2026-03-10T08:44:47.266 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/100
2026-03-10T08:44:47.266 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100
2026-03-10T08:44:47.270 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/100
2026-03-10T08:44:47.282 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/100
2026-03-10T08:44:47.287 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/100
2026-03-10T08:44:47.297 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/100
2026-03-10T08:44:47.303 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/100
2026-03-10T08:44:47.320 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100
2026-03-10T08:44:47.329 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/100
2026-03-10T08:44:47.332 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/100
2026-03-10T08:44:47.333 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/100
2026-03-10T08:44:47.333 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-10T08:44:47.340 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/100
2026-03-10T08:44:47.343 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/100
2026-03-10T08:44:47.345 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100
2026-03-10T08:44:47.351 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/100
2026-03-10T08:44:47.352 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/100
2026-03-10T08:44:47.356 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/100
2026-03-10T08:44:47.362 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/100
2026-03-10T08:44:47.362 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100
2026-03-10T08:44:47.364 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/100
2026-03-10T08:44:47.368 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/100
2026-03-10T08:44:47.369 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100
2026-03-10T08:44:47.387 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-10T08:44:47.387 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.387 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T08:44:47.387 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-10T08:44:47.387 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-10T08:44:47.387 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:47.392 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-10T08:44:47.400 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100
2026-03-10T08:44:47.414 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-10T08:44:47.414 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.414 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T08:44:47.414 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:47.423 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-10T08:44:47.433 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100
2026-03-10T08:44:47.435 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/100
2026-03-10T08:44:47.440 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/100
2026-03-10T08:44:47.444 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/100
2026-03-10T08:44:47.453 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/100
2026-03-10T08:44:47.465 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/100
2026-03-10T08:44:47.467 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/100
2026-03-10T08:44:47.471 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/100
2026-03-10T08:44:47.481 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/100
2026-03-10T08:44:47.484 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/100
2026-03-10T08:44:47.487 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/100
2026-03-10T08:44:47.499 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-10T08:44:47.499 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T08:44:47.499 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.500 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-10T08:44:47.517 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/100
2026-03-10T08:44:47.523 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/100
2026-03-10T08:44:47.526 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/100
2026-03-10T08:44:47.526 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-10T08:44:47.536 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/100
2026-03-10T08:44:47.542 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/100
2026-03-10T08:44:47.547 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/100
2026-03-10T08:44:47.548 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100
2026-03-10T08:44:47.548 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/100
2026-03-10T08:44:47.551 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/100
2026-03-10T08:44:47.553 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/100
2026-03-10T08:44:47.555 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100
2026-03-10T08:44:47.574 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-10T08:44:47.574 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.574 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T08:44:47.574 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T08:44:47.574 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T08:44:47.574 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.575 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-10T08:44:47.586 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-10T08:44:47.591 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/100
2026-03-10T08:44:47.594 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/100
2026-03-10T08:44:47.596 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/100
2026-03-10T08:44:47.599 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/100
2026-03-10T08:44:47.603 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/100
2026-03-10T08:44:47.607 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/100
2026-03-10T08:44:47.611 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/100
2026-03-10T08:44:47.648 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/100
2026-03-10T08:44:47.657 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/100
2026-03-10T08:44:47.665 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/100
2026-03-10T08:44:47.669 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/100
2026-03-10T08:44:47.672 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/100
2026-03-10T08:44:47.677 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/100
2026-03-10T08:44:47.678 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-10T08:44:47.678 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T08:44:47.678 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:47.678 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/100
2026-03-10T08:44:47.679 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-10T08:44:47.682 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/100
2026-03-10T08:44:47.684 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/100
2026-03-10T08:44:47.704 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100
2026-03-10T08:44:47.706 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100
2026-03-10T08:44:47.706 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.706 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T08:44:47.706 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.706 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100
2026-03-10T08:44:47.713 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100
2026-03-10T08:44:47.715 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/100
2026-03-10T08:44:47.717 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/100
2026-03-10T08:44:47.720 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/100
2026-03-10T08:44:47.720 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/100
2026-03-10T08:44:47.722 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/100
2026-03-10T08:44:47.725 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/100
2026-03-10T08:44:47.726 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/100
2026-03-10T08:44:47.728 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/100
2026-03-10T08:44:47.729 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/100
2026-03-10T08:44:47.731 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/100
2026-03-10T08:44:47.731 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/100
2026-03-10T08:44:47.739 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 58/100
2026-03-10T08:44:47.743 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 59/100
2026-03-10T08:44:47.745 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 60/100
2026-03-10T08:44:47.748 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 61/100
2026-03-10T08:44:47.751 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 62/100
2026-03-10T08:44:47.752 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-10T08:44:47.752 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T08:44:47.752 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T08:44:47.752 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T08:44:47.752 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T08:44:47.752 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:47.753 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-10T08:44:47.757 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 63/100
2026-03-10T08:44:47.761 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 64/100
2026-03-10T08:44:47.764 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100
2026-03-10T08:44:47.766 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 65/100
2026-03-10T08:44:47.768 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/100
2026-03-10T08:44:47.770 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/100
2026-03-10T08:44:47.771 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 66/100
2026-03-10T08:44:47.773 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/100
2026-03-10T08:44:47.776 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/100
2026-03-10T08:44:47.777 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 67/100
2026-03-10T08:44:47.780 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/100
2026-03-10T08:44:47.780 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 68/100
2026-03-10T08:44:47.782 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 69/100
2026-03-10T08:44:47.783 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/100
2026-03-10T08:44:47.788 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/100
2026-03-10T08:44:47.788 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 70/100
2026-03-10T08:44:47.791 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 71/100
2026-03-10T08:44:47.795 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 72/100
2026-03-10T08:44:47.806 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 73/100
2026-03-10T08:44:47.810 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 74/100
2026-03-10T08:44:47.813 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 75/100
2026-03-10T08:44:47.815 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 76/100
2026-03-10T08:44:47.817 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 77/100
2026-03-10T08:44:47.822 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 78/100
2026-03-10T08:44:47.826 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 79/100
2026-03-10T08:44:47.835 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/100
2026-03-10T08:44:47.846 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100
2026-03-10T08:44:47.846 INFO:teuthology.orchestra.run.vm06.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T08:44:47.846 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:47.847 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/100
2026-03-10T08:44:47.849 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/100
2026-03-10T08:44:47.853 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100
2026-03-10T08:44:47.854 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/100
2026-03-10T08:44:47.856 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/100
2026-03-10T08:44:47.859 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/100
2026-03-10T08:44:47.862 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/100
2026-03-10T08:44:47.883 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100
2026-03-10T08:44:47.883 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100
2026-03-10T08:44:47.884 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet:
ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-10T08:44:47.884 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T08:44:47.884 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T08:44:47.884 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:47.884 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-10T08:44:47.891 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-10T08:44:47.893 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/100 2026-03-10T08:44:47.895 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/100 2026-03-10T08:44:47.896 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-10T08:44:47.898 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/100 2026-03-10T08:44:47.900 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/100 2026-03-10T08:44:47.900 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 82/100 2026-03-10T08:44:47.903 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/100 2026-03-10T08:44:47.903 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 83/100 2026-03-10T08:44:47.905 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 84/100 2026-03-10T08:44:47.905 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-10T08:44:47.905 
INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/100 2026-03-10T08:44:47.908 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/100 2026-03-10T08:44:47.915 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 58/100 2026-03-10T08:44:47.919 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 59/100 2026-03-10T08:44:47.921 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 60/100 2026-03-10T08:44:47.924 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 61/100 2026-03-10T08:44:47.926 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 62/100 2026-03-10T08:44:47.931 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 63/100 2026-03-10T08:44:47.935 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 64/100 2026-03-10T08:44:47.939 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-10T08:44:47.943 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 66/100 2026-03-10T08:44:47.949 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 67/100 2026-03-10T08:44:47.952 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 68/100 2026-03-10T08:44:47.954 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 69/100 2026-03-10T08:44:47.959 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 70/100 2026-03-10T08:44:47.962 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 71/100 2026-03-10T08:44:47.965 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : 
python3-zc-lockfile-2.0-10.el9.noarch 72/100 2026-03-10T08:44:47.973 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 73/100 2026-03-10T08:44:47.978 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 74/100 2026-03-10T08:44:47.981 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 75/100 2026-03-10T08:44:47.984 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 76/100 2026-03-10T08:44:47.985 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 77/100 2026-03-10T08:44:47.990 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 78/100 2026-03-10T08:44:47.994 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 79/100 2026-03-10T08:44:48.011 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-10T08:44:48.011 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-03-10T08:44:48.011 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:48.018 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-10T08:44:48.045 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-10T08:44:48.045 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-10T08:44:48.055 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-10T08:44:48.060 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 82/100 2026-03-10T08:44:48.063 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 83/100 2026-03-10T08:44:48.065 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 84/100 2026-03-10T08:44:48.065 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /sys 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /proc 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /mnt 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /var/tmp 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /home 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /root 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /tmp 2026-03-10T08:44:53.156 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:53.166 
INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 86/100 2026-03-10T08:44:53.182 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-10T08:44:53.182 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-10T08:44:53.190 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-10T08:44:53.192 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 88/100 2026-03-10T08:44:53.195 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 89/100 2026-03-10T08:44:53.197 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 90/100 2026-03-10T08:44:53.199 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 91/100 2026-03-10T08:44:53.199 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-10T08:44:53.212 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-10T08:44:53.214 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 93/100 2026-03-10T08:44:53.216 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 94/100 2026-03-10T08:44:53.219 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 95/100 2026-03-10T08:44:53.223 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 96/100 2026-03-10T08:44:53.229 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 97/100 2026-03-10T08:44:53.236 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 98/100 2026-03-10T08:44:53.241 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : 
abseil-cpp-20211102.0-4.el9.x86_64 99/100 2026-03-10T08:44:53.241 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp 2026-03-10T08:44:53.295 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:53.306 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 86/100 2026-03-10T08:44:53.326 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-10T08:44:53.326 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-10T08:44:53.334 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-10T08:44:53.338 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 88/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/100 2026-03-10T08:44:53.340 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 89/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 
16/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/100 2026-03-10T08:44:53.341 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/100 2026-03-10T08:44:53.341 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/100 2026-03-10T08:44:53.342 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/100 2026-03-10T08:44:53.342 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/100 
2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 73/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ply-3.11-14.el9.noarch 74/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 75/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 76/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 78/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/100 2026-03-10T08:44:53.343 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 81/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 82/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 83/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 84/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 85/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 87/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 88/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 89/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 90/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 91/100 2026-03-10T08:44:53.343 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 92/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 90/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 93/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 94/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 95/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: 
Verifying : python3-zc-lockfile-2.0-10.el9.noarch 96/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 97/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 98/100 2026-03-10T08:44:53.344 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 99/100 2026-03-10T08:44:53.345 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 91/100 2026-03-10T08:44:53.345 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-10T08:44:53.360 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-10T08:44:53.362 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 93/100 2026-03-10T08:44:53.364 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 94/100 2026-03-10T08:44:53.368 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 95/100 2026-03-10T08:44:53.370 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 96/100 2026-03-10T08:44:53.376 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 97/100 2026-03-10T08:44:53.384 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 98/100 2026-03-10T08:44:53.390 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 99/100 2026-03-10T08:44:53.390 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:53.417 
INFO:teuthology.orchestra.run.vm06.stdout:Removed: 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.417 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: 
flexiblas-3.0.4-9.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T08:44:53.418 
INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 
2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T08:44:53.418 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply-3.11-14.el9.noarch 
2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T08:44:53.419 
INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:53.419 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:53.493 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-10T08:44:53.493 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/100 2026-03-10T08:44:53.493 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/100 2026-03-10T08:44:53.494 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-bcrypt-3.2.2-1.el9.x86_64 40/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/100 2026-03-10T08:44:53.494 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-jaraco-collections-3.0.0-8.el9.noarch 56/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: 
Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 73/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.noarch 74/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 75/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 76/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 78/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 81/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 82/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 83/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 84/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 85/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 87/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-rsa-4.9-2.el9.noarch 88/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 89/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 90/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 91/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 92/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 93/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 94/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 95/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 96/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 97/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 98/100 2026-03-10T08:44:53.495 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 99/100 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.571 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T08:44:53.572 
INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T08:44:53.572 
INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T08:44:53.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T08:44:53.573 
INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T08:44:53.573 
INFO:teuthology.orchestra.run.vm03.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:53.573 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:Removing: 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================ 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:Remove 1 Package 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 775 k 2026-03-10T08:44:53.620 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check 2026-03-10T08:44:53.622 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded. 
2026-03-10T08:44:53.622 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:53.623 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:53.624 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:53.640 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:53.640 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T08:44:53.743 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:53.775 INFO:teuthology.orchestra.run.vm03.stdout:Remove 1 Package
2026-03-10T08:44:53.776 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:53.776 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 775 k
2026-03-10T08:44:53.776 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:53.777 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:53.777 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:53.779 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:53.779 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:53.789 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T08:44:53.789 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:53.789 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:53.789 INFO:teuthology.orchestra.run.vm06.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:44:53.789 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:53.789 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:53.795 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:44:53.795 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T08:44:53.894 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T08:44:53.935 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T08:44:53.935 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:53.935 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:53.935 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T08:44:53.935 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:53.935 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:53.959 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T08:44:53.959 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:53.962 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:53.962 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:53.962 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:54.098 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T08:44:54.098 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:54.101 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:54.102 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:54.102 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:54.116 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr
2026-03-10T08:44:54.116 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:54.119 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:54.119 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:54.119 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:54.253 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr
2026-03-10T08:44:54.253 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:54.256 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:54.257 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:54.257 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:54.271 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T08:44:54.271 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:54.274 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:54.274 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:54.274 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:54.412 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T08:44:54.412 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:54.415 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:54.415 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:54.415 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:54.437 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T08:44:54.437 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:54.440 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:54.440 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:54.440 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:54.576 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T08:44:54.576 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:54.579 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:54.580 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:54.580 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:54.595 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-rook
2026-03-10T08:44:54.596 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:54.598 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:54.599 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:54.599 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:54.736 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-rook
2026-03-10T08:44:54.737 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:54.740 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:54.740 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:54.740 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:54.755 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T08:44:54.755 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:54.758 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:54.758 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:54.758 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:54.899 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T08:44:54.899 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:54.902 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:54.902 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:54.902 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:Remove 1 Package
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 3.6 M
2026-03-10T08:44:54.927 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:54.929 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:54.929 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:54.938 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:54.938 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:54.962 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:54.976 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T08:44:55.029 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T08:44:55.069 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T08:44:55.069 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.069 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:55.069 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.069 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.069 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:55.074 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:55.074 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:Remove 1 Package
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.6 M
2026-03-10T08:44:55.075 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:55.076 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:55.076 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:55.085 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:55.086 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:55.110 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:44:55.124 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T08:44:55.186 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T08:44:55.227 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T08:44:55.227 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.227 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:55.227 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.227 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.227 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:55.246 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: ceph-volume
2026-03-10T08:44:55.246 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:55.249 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:55.249 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:55.250 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:55.399 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-volume
2026-03-10T08:44:55.399 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:55.402 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:55.403 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:55.403 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:55.417 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repo Size
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:Remove 2 Packages
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 610 k
2026-03-10T08:44:55.418 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:55.420 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:55.420 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:55.430 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:55.430 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:55.455 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:55.457 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T08:44:55.470 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T08:44:55.526 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T08:44:55.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.577 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:55.596 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 610 k
2026-03-10T08:44:55.597 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:55.599 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:55.599 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:55.609 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:55.610 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:55.635 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:44:55.637 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T08:44:55.651 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T08:44:55.710 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T08:44:55.710 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T08:44:55.753 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T08:44:55.753 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.753 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:55.753 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.753 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.753 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.754 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:55.771 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repo Size
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Remove 3 Packages
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 3.7 M
2026-03-10T08:44:55.772 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:55.774 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:55.774 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:55.790 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:55.791 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:55.821 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:55.823 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T08:44:55.824 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T08:44:55.824 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T08:44:55.888 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T08:44:55.888 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T08:44:55.888 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T08:44:55.933 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T08:44:55.933 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.933 INFO:teuthology.orchestra.run.vm06.stdout:Removed:
2026-03-10T08:44:55.933 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.934 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.934 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:55.934 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:55.934 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:55.960 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Remove 3 Packages
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.7 M
2026-03-10T08:44:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T08:44:55.963 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T08:44:55.963 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T08:44:55.982 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T08:44:55.982 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T08:44:56.015 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T08:44:56.017 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T08:44:56.019 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T08:44:56.019 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T08:44:56.080 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T08:44:56.080 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T08:44:56.080 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T08:44:56.118 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:56.121 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: libcephfs-devel
2026-03-10T08:44:56.121 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal.
2026-03-10T08:44:56.124 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:56.124 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do.
2026-03-10T08:44:56.124 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-10T08:44:56.289 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: libcephfs-devel
2026-03-10T08:44:56.289 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T08:44:56.293 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:56.293 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T08:44:56.293 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T08:44:56.305 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout:Removing:
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout:Removing dependent packages:
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout:Removing unused dependencies:
2026-03-10T08:44:56.306 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:================================================================================
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:Remove 20 Packages
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:Freed space: 79 M
2026-03-10T08:44:56.307 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-10T08:44:56.311 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-10T08:44:56.311 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-10T08:44:56.332 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-10T08:44:56.333 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-10T08:44:56.373 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-10T08:44:56.375 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T08:44:56.378 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T08:44:56.381 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T08:44:56.381 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T08:44:56.393 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T08:44:56.395 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T08:44:56.397 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T08:44:56.400 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T08:44:56.401 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T08:44:56.404 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T08:44:56.404 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T08:44:56.416 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T08:44:56.416 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T08:44:56.416 INFO:teuthology.orchestra.run.vm06.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T08:44:56.416 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-10T08:44:56.429 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T08:44:56.432 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T08:44:56.435 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T08:44:56.439 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T08:44:56.442 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T08:44:56.444 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T08:44:56.446 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T08:44:56.448 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T08:44:56.450 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T08:44:56.463 INFO:teuthology.orchestra.run.vm06.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T08:44:56.465 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T08:44:56.466 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-10T08:44:56.466 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-03-10T08:44:56.466 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-10T08:44:56.466 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages: 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies: 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 
2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:Remove 20 Packages 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 79 M 2026-03-10T08:44:56.467 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-10T08:44:56.471 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-10T08:44:56.471 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-10T08:44:56.493 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-03-10T08:44:56.493 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-10T08:44:56.525 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20 2026-03-10T08:44:56.526 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-10T08:44:56.534 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-03-10T08:44:56.537 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20 2026-03-10T08:44:56.539 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20 2026-03-10T08:44:56.542 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20 2026-03-10T08:44:56.542 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-10T08:44:56.559 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-10T08:44:56.561 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20 2026-03-10T08:44:56.563 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20 2026-03-10T08:44:56.565 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T08:44:56.566 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20 2026-03-10T08:44:56.569 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20 2026-03-10T08:44:56.569 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 
10/20 2026-03-10T08:44:56.572 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout:Removed: 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: 
rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-10T08:44:56.573 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:56.584 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T08:44:56.584 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-10T08:44:56.584 INFO:teuthology.orchestra.run.vm03.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-10T08:44:56.584 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:56.598 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-10T08:44:56.600 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20 2026-03-10T08:44:56.604 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20 2026-03-10T08:44:56.607 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20 2026-03-10T08:44:56.610 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20 2026-03-10T08:44:56.613 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20 2026-03-10T08:44:56.615 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20 2026-03-10T08:44:56.617 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20 2026-03-10T08:44:56.619 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20 
2026-03-10T08:44:56.633 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: 
Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20 2026-03-10T08:44:56.693 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T08:44:56.737 
INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T08:44:56.737 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:56.771 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: librbd1 2026-03-10T08:44:56.771 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:56.774 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:56.774 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:56.775 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:56.960 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: librbd1 2026-03-10T08:44:56.960 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:56.962 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-rados 2026-03-10T08:44:56.962 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 
2026-03-10T08:44:56.963 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:56.963 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:56.963 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:56.964 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:56.965 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:56.965 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:57.134 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-rgw 2026-03-10T08:44:57.135 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:57.137 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:57.137 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:57.137 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:57.139 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rados 2026-03-10T08:44:57.140 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:57.142 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:57.142 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:57.142 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:57.298 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-cephfs 2026-03-10T08:44:57.299 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:57.301 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:57.301 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:57.301 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 
2026-03-10T08:44:57.304 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rgw 2026-03-10T08:44:57.304 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:57.306 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:57.307 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:57.307 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:57.463 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: python3-rbd 2026-03-10T08:44:57.463 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:57.465 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:57.466 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:57.466 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:57.469 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-cephfs 2026-03-10T08:44:57.469 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:57.471 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:57.471 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:57.471 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:57.630 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: rbd-fuse 2026-03-10T08:44:57.630 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:57.633 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:57.633 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:57.633 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:57.637 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rbd 2026-03-10T08:44:57.637 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 
2026-03-10T08:44:57.639 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:57.640 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:57.640 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:57.801 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: rbd-mirror 2026-03-10T08:44:57.801 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:57.803 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:57.804 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:57.804 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:57.807 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-fuse 2026-03-10T08:44:57.807 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:57.809 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:57.810 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:57.810 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:57.966 INFO:teuthology.orchestra.run.vm06.stdout:No match for argument: rbd-nbd 2026-03-10T08:44:57.966 INFO:teuthology.orchestra.run.vm06.stderr:No packages marked for removal. 2026-03-10T08:44:57.968 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved. 2026-03-10T08:44:57.969 INFO:teuthology.orchestra.run.vm06.stdout:Nothing to do. 2026-03-10T08:44:57.969 INFO:teuthology.orchestra.run.vm06.stdout:Complete! 2026-03-10T08:44:57.973 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-mirror 2026-03-10T08:44:57.973 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:57.975 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:57.975 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 
2026-03-10T08:44:57.975 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:57.991 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean all 2026-03-10T08:44:58.115 INFO:teuthology.orchestra.run.vm06.stdout:56 files removed 2026-03-10T08:44:58.137 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T08:44:58.138 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-nbd 2026-03-10T08:44:58.138 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-10T08:44:58.140 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-10T08:44:58.141 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-10T08:44:58.141 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-10T08:44:58.162 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean expire-cache 2026-03-10T08:44:58.165 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all 2026-03-10T08:44:58.292 INFO:teuthology.orchestra.run.vm03.stdout:56 files removed 2026-03-10T08:44:58.309 INFO:teuthology.orchestra.run.vm06.stdout:Cache was expired 2026-03-10T08:44:58.309 INFO:teuthology.orchestra.run.vm06.stdout:0 files removed 2026-03-10T08:44:58.314 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T08:44:58.328 DEBUG:teuthology.parallel:result is None 2026-03-10T08:44:58.336 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean expire-cache 2026-03-10T08:44:58.478 INFO:teuthology.orchestra.run.vm03.stdout:Cache was expired 2026-03-10T08:44:58.478 INFO:teuthology.orchestra.run.vm03.stdout:0 files removed 2026-03-10T08:44:58.493 DEBUG:teuthology.parallel:result is None 2026-03-10T08:44:58.493 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local 2026-03-10T08:44:58.493 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm06.local 2026-03-10T08:44:58.493 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/yum.repos.d/ceph.repo 
2026-03-10T08:44:58.493 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T08:44:58.518 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-10T08:44:58.522 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-10T08:44:58.581 DEBUG:teuthology.parallel:result is None 2026-03-10T08:44:58.589 DEBUG:teuthology.parallel:result is None 2026-03-10T08:44:58.589 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T08:44:58.591 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-10T08:44:58.591 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T08:44:58.623 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T08:44:58.636 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found 2026-03-10T08:44:58.646 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: ntpq: command not found 2026-03-10T08:44:58.690 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T08:44:58.690 INFO:teuthology.orchestra.run.vm03.stdout:=============================================================================== 2026-03-10T08:44:58.690 INFO:teuthology.orchestra.run.vm03.stdout:^+ vps-nue1.orleans.ddnss.de 2 6 377 39 +1668us[+1673us] +/- 16ms 2026-03-10T08:44:58.690 INFO:teuthology.orchestra.run.vm03.stdout:^* ntp1.lwlcom.net 1 6 377 38 -3807us[-3803us] +/- 15ms 2026-03-10T08:44:58.690 INFO:teuthology.orchestra.run.vm03.stdout:^+ stage3.opensuse.org 3 6 377 36 +107us[ +107us] +/- 16ms 2026-03-10T08:44:58.690 INFO:teuthology.orchestra.run.vm03.stdout:^+ 139-162-156-95.ip.linode> 2 6 377 39 +4457us[+4462us] +/- 32ms 2026-03-10T08:44:58.691 
INFO:teuthology.orchestra.run.vm06.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T08:44:58.691 INFO:teuthology.orchestra.run.vm06.stdout:=============================================================================== 2026-03-10T08:44:58.691 INFO:teuthology.orchestra.run.vm06.stdout:^* ntp1.lwlcom.net 1 6 377 36 -3781us[-3786us] +/- 15ms 2026-03-10T08:44:58.691 INFO:teuthology.orchestra.run.vm06.stdout:^+ stage3.opensuse.org 3 6 377 37 +124us[ +119us] +/- 16ms 2026-03-10T08:44:58.691 INFO:teuthology.orchestra.run.vm06.stdout:^+ 139-162-156-95.ip.linode> 2 6 377 37 +4330us[+4325us] +/- 32ms 2026-03-10T08:44:58.691 INFO:teuthology.orchestra.run.vm06.stdout:^+ vps-nue1.orleans.ddnss.de 2 6 377 38 +1892us[+1887us] +/- 16ms 2026-03-10T08:44:58.691 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T08:44:58.693 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T08:44:58.693 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T08:44:58.695 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T08:44:58.697 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T08:44:58.699 INFO:teuthology.task.internal:Duration was 925.084715 seconds 2026-03-10T08:44:58.699 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T08:44:58.701 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 
2026-03-10T08:44:58.701 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T08:44:58.733 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T08:44:58.769 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T08:44:58.775 INFO:teuthology.orchestra.run.vm06.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T08:44:59.115 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T08:44:59.115 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-10T08:44:59.115 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T08:44:59.139 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local
2026-03-10T08:44:59.140 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T08:44:59.165 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T08:44:59.165 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:44:59.181 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:44:59.630 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T08:44:59.630 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:44:59.632 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T08:44:59.655 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T08:44:59.656 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T08:44:59.656 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T08:44:59.656 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:44:59.656 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T08:44:59.657 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T08:44:59.657 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T08:44:59.657 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T08:44:59.658 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T08:44:59.658 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T08:44:59.789 INFO:teuthology.orchestra.run.vm03.stderr: 97.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T08:44:59.790 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T08:44:59.792 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T08:44:59.795 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T08:44:59.795 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T08:44:59.855 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T08:44:59.878 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T08:44:59.881 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:44:59.898 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:44:59.922 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-10T08:44:59.946 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core
2026-03-10T08:44:59.960 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:44:59.991 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:44:59.991 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T08:45:00.016 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T08:45:00.016 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T08:45:00.019 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T08:45:00.019 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964/remote/vm03
2026-03-10T08:45:00.019 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T08:45:00.062 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/964/remote/vm06
2026-03-10T08:45:00.062 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T08:45:00.088 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T08:45:00.088 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T08:45:00.103 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T08:45:00.143 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T08:45:00.145 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T08:45:00.145 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T08:45:00.148 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T08:45:00.148 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T08:45:00.159 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T08:45:00.174 INFO:teuthology.orchestra.run.vm03.stdout:  8532145      0 drwxr-xr-x   2 ubuntu   ubuntu          6 Mar 10 08:45 /home/ubuntu/cephtest
2026-03-10T08:45:00.200 INFO:teuthology.orchestra.run.vm06.stdout:  8532145      0 drwxr-xr-x   2 ubuntu   ubuntu          6 Mar 10 08:45 /home/ubuntu/cephtest
2026-03-10T08:45:00.200 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T08:45:00.205 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python}
duration: 925.0847148895264
flavor: default
owner: kyr
success: true

2026-03-10T08:45:00.206 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T08:45:00.223 INFO:teuthology.run:pass